Interpretations of negative confidence interval
"It seems to me the SEM has little use except to calculate a CI? What quantitative information can we derive from the SEM? We can say the true mean weight of the 1000 chickens is likely (very qualitatively) to fall between 2 kg and 8 kg (sample mean ± SEM), but do we know the probability?"

Let us begin with an observation. The SEM is not a descriptive statistic. It is derived from the data: it informs you about the sampling error of the statistic, not about the uncertainty in the population. It is an artifact of the measurement process. Had you chosen a different statistic, such as the median, you would have had a different standard error. Likewise, had your model been different, you would have had different standard errors.

There is an infinite number of possible confidence interval functions. You are using the standard one from a textbook, but it is not the only one. It is taught because it has desirable properties, but you could arrive at a different interval if, for example, you chose to formally model the losses you would incur from getting a bad sample. The SEM provides sample-specific information; for the purposes of your question, its only use is as an interim step in a calculation.

Confidence intervals tell you the region in which you have confidence for the location of the mean (or some other statistic). They tell you nothing about the distribution of the sizes of the chickens themselves. The interval you may want is the tolerance interval: if you wanted to know the range into which 95% of your population of chickens is likely to fall, then you want the 95% tolerance interval, not the 95% confidence interval.

"How to interpret the negative lower bound of the CI?"

The bounds of a confidence interval have no individual interpretation; they are random numbers. A function that generates an interval is an $\alpha$ percent confidence interval if, upon infinite repetition, the intervals it produces would cover the true value of the parameter at least $\alpha$ percent of the time. If you create an $\alpha$ percent confidence interval and it is $[a,b]$, then the interpretation is that if you behave as if the true value were inside that range, you would be made a fool of no more than $(100-\alpha)$ percent of the time once repetitions became very large.

A negative bound is fine. Let's imagine that we are Mother Nature and know the true population mean is 4 kg. You should be delighted then, because the interval $[-0.88, 10.88]$ contains the actual value. The lower bound is indeed nonsense, but Frequentist methods allow nonsensical answers as long as the intervals cover the true value a certain percentage of the time.

Also note that narrow intervals are not better than wider ones. Narrow ones are not more accurate than wide ones; they are equally reliable in that both cover the true value at least a fixed percentage of the time over many repetitions. To see why, imagine that you divided the population of chickens in half randomly and weighed each half. One half happened to yield a narrower interval than the other. What about the randomization process made one group more accurate? Nothing.

"How much probability is there that the true mean weight will fall in the range 0 kg to 10.88 kg?"

That is a model-specific question, and I would be concerned that your data are not normally distributed. Chickens of roughly equal age and diet may well be close to normal, but a population containing both chicks and very old chickens probably is not; I would be surprised to find normality on an uncontrolled basis. However, if we pretend that the chickens are sufficiently similar to each other to be normally distributed, then we can start to address your question.

First, a confidence interval is not a statement of probability. If you want a probability, then you will need to use a Bayesian model: a Bayesian credible interval will tell you the probability that a parameter is inside some range. Frequentist methods will not do that. The reason is that, in Frequentist thinking, there is either a 100% or a 0% chance that the parameter is inside the range; you cannot make a probability statement about a fact. George Washington either was the first President, or he was not. That is a factual question and not subject to probability statements. A Frequentist cannot say "it is probably raining"; a Bayesian can. It is either raining, or it is not. The parameter is either inside the range, or it is not.

What you can say is that you have 95% confidence that the interval covers the parameter. What you cannot say is that there is a 95% chance that the parameter is inside the interval; that is not true. What you have confidence in is the procedure, not the data. Your data are a random collection; there is supposed to be nothing special about them. As such, your interval and sample mean are random too, and there is nothing special about them either. The population parameter, $\mu$, is special. What makes a sample mean or a confidence interval special in any sense is their relationship to $\mu$: they summarize the information you have gathered about $\mu$, but they are not $\mu$. The procedure gives you guarantees, if your model is valid, about how often you will make incorrect decisions and take incorrect actions based on the sample that you saw.

Even tolerance intervals require you to state how often you want to be made a fool of. There is no absolute tolerance interval; there are only intervals given $\alpha$, the data, and the model.
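To see concretely why a confidence interval for the mean answers a different question from an interval for individual chickens, here is a small R sketch on simulated weights. Everything is illustrative: the 4 kg mean, the spread, and the crude large-sample normal-theory tolerance factor are assumptions, not the question's actual data.

```r
# Simulated stand-in for the 1000 chicken weights (assumed values).
set.seed(1)
w <- rnorm(1000, mean = 4, sd = 1.5)

n    <- length(w)
xbar <- mean(w)
sem  <- sd(w) / sqrt(n)

# 95% confidence interval for the MEAN: narrow, shrinks as n grows.
ci <- xbar + c(-1, 1) * qt(0.975, n - 1) * sem

# Rough 95% tolerance-style interval for INDIVIDUAL weights
# (crude large-n normal approximation; an exact tolerance interval
# uses a larger k-factor): wide, does not shrink to a point.
ti <- xbar + c(-1, 1) * qnorm(0.975) * sd(w)

ci; ti
```

The interval for individual weights is far wider than the interval for the mean, which is the distinction the answer draws.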
Interpretations of negative confidence interval
What you did: you created confidence intervals under the assumption that chicken weights are drawn from a normal distribution (with value range $(-\infty, \infty)$). In fact they may be drawn from some other distribution with $\mathbb{R_+}$ support, e.g. an Erlang or chi distribution; but when the sample size is $> 50$ we can assume that the sample mean is approximately normally distributed. So the value $-0.88$ is an effect of that assumption, and you can interpret it as 0. To do it in a strict mathematical way, you should find the real distribution of chicken weights, then construct proper confidence intervals (which will be different from those for the normal distribution); then you will have more accurate estimates and will draw more meaningful conclusions. But remember that the conclusions you draw will be conclusions about the sample of 1000 observations you already have!
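One way to act on this advice in R, sketched on simulated data. The gamma family and MASS::fitdistr are illustrative choices, not the answer's prescription: substitute whatever positive-support distribution actually fits your weights.

```r
# Sketch: fit a positive-support distribution, then build a CI for the
# mean by parametric bootstrap so the limits respect the support.
library(MASS)  # for fitdistr()

set.seed(1)
w <- rgamma(1000, shape = 8, rate = 2)   # stand-in for real chicken weights

fit   <- fitdistr(w, "gamma")            # ML fit of shape and rate
shape <- fit$estimate["shape"]
rate  <- fit$estimate["rate"]

# Resample means from the fitted gamma: every bootstrap mean is positive,
# so the resulting interval cannot have a negative lower limit.
boot_means <- replicate(2000, mean(rgamma(length(w), shape = shape, rate = rate)))
quantile(boot_means, c(0.025, 0.975))
</imports>
```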
Interpretations of negative confidence interval
I would just (first of all) work on a logarithmic scale and back-transform the confidence limits obtained on that scale. That way you're assured of positive limits. Going full Bayes on this is an answer of wide appeal, but as you're asking this question I am not clear that "learn a whole new approach to statistics" is likely to be practical for you immediately.

All confidence limits are at best smart guesses, but it's clear that a negative lower limit is biologically absurd, so you owe it to science to avoid that if possible. I don't go along with those who say "just round up to zero": the technique is inappropriate if it produces absurd results. More generally, a scale on which the data are symmetrically distributed will produce more sensible results than those you cite; a square root or a cube root scale might work better than a logarithmic scale in some cases. Some of this advice depends on taking your example fairly literally. What's axiomatic is that using logarithms first is guaranteed to yield positive upper and lower limits. (I regard this answer as consistent with the advice to consider a generalized linear model with an appropriate family and non-identity link.)

PS Why not bootstrap a CI?
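Both suggestions fit in a few lines of R. The data below are a simulated stand-in for the weights; note that the back-transformed log-scale interval is an interval for the geometric mean, not the arithmetic mean.

```r
# Sketch: log-scale CI back-transformed, plus a nonparametric bootstrap CI.
set.seed(1)
w <- rlnorm(1000, meanlog = 1.3, sdlog = 0.4)   # positive "weights"

lw     <- log(w)
n      <- length(lw)
ci_log <- mean(lw) + c(-1, 1) * qt(0.975, n - 1) * sd(lw) / sqrt(n)
exp(ci_log)   # CI for the geometric mean; both limits necessarily > 0

# Nonparametric bootstrap percentile CI for the arithmetic mean; its
# limits are sample means of positive values, so they are positive too.
boot <- replicate(2000, mean(sample(w, n, replace = TRUE)))
quantile(boot, c(0.025, 0.975))
```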
Interpretations of negative confidence interval
To produce a probability for weights, you should really apply Bayesian methods in this case. This is not about frequentism versus Bayesianism; rather, you have some very strong prior information here: you know that a chicken's weight is not negative, and that it is not 0.5 kg either. Standard frequentist methods are basically open to all results, often presuming normally distributed data, and your example is a good example of non-normally distributed data. Find yourself a credible prior distribution that excludes negative chicken weights (how about a half-normal prior?) and compute a posterior distribution. From that posterior distribution you can derive real probabilities.
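A minimal sketch of this idea in R via grid approximation. The half-normal prior scale, the known error SD, and the simulated data are all assumptions for illustration, not part of the original answer.

```r
# Sketch: posterior for the mean weight under a prior with zero mass on
# negative values (half-normal, as the answer suggests), known sigma.
set.seed(1)
w     <- rnorm(50, mean = 4, sd = 1.5)   # simulated weights
sigma <- 1.5                              # assumed known for simplicity

mu_grid <- seq(0.01, 12, by = 0.01)       # prior support: positive means only
prior   <- dnorm(mu_grid, 0, 5)           # half-normal on mu > 0 (up to a constant)
loglik  <- sapply(mu_grid, function(m) sum(dnorm(w, m, sigma, log = TRUE)))
post    <- prior * exp(loglik - max(loglik))
post    <- post / sum(post)

# 95% credible interval: unlike a CI, this supports the statement
# "the probability that mu lies in this range is 0.95".
cdf <- cumsum(post)
c(lower = mu_grid[which(cdf >= 0.025)[1]],
  upper = mu_grid[which(cdf >= 0.975)[1]])
```

By construction the lower credible limit cannot be negative, because the prior puts no mass there.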
Tail bound of beta distribution when $\alpha$ is sufficiently close to zero while $\beta$ greater than 1
The following analysis obtains bounds that hold for sufficiently small $\alpha$ and are expressed in terms of elementary functions. The tail probability, written as a function of $\alpha\gt 0,$ is $$p_{t,\beta}(\alpha) = {\Pr}_{\alpha,\beta}(X\gt t) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\int_t^1 s^{\alpha-1}(1-s)^{\beta-1}\mathrm{d}s.$$

For an upper bound, observe that over the interval of integration $(t,1),$ $$s^{\alpha-1}(1-s)^{\beta-1} \lt t^{\alpha-1}(1-s)^{\beta-1},$$ since the exponent $\alpha - 1$ is negative. The relative error is no greater than $t^{\alpha-1},$ which will be satisfactory for $t\approx 1.$ Integrate that upper bound and use the fundamental relationship $z\Gamma(z)=\Gamma(z+1)$ to produce $$p_{t,\beta}(\alpha) \le \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\frac{t^{\alpha-1}(1-t)^\beta}{\beta} = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha+1)\Gamma(\beta+1)}\,\alpha\, t^{\alpha-1}(1-t)^\beta.$$

This is a multiple of $\alpha.$ The multiplier, although depending on $\alpha,$ is differentiable at $\alpha=0,$ where its value is $$\frac{\Gamma(0+\beta)}{\Gamma(0+1)\Gamma(\beta+1)}\, t^{0-1}(1-t)^\beta = \frac{(1-t)^\beta}{t\beta}.$$ Thus, at least for sufficiently small $\alpha,$ $$p_{t,\beta}(\alpha) \lt \alpha \left(\frac{(1-t)^\beta}{t\beta}\right).\tag{1}$$

For a lower bound, take $0\lt t\le 1.$ The Binomial expansion $$\eqalign{s^{\alpha-1}&=(1+(s-1))^{\alpha-1} \\ &= 1 + ({\alpha-1})(s-1) + \cdots + \frac{(\alpha-1)(\alpha-2)\cdots(\alpha-i)}{i!}(s-1)^i + \cdots\\ &= \sum_{i=0}^\infty \binom{\alpha-1}{i}(s-1)^i}$$ converges absolutely, permitting the integral of $s^{\alpha-1}(1-s)^{\beta-1}$ to be performed term by term as $$\eqalign{ p_{t,\beta}(\alpha) &= \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\sum_{i=0}^\infty \int_t^1 \binom{\alpha-1}{i}(s-1)^i(1-s)^{\beta-1}\mathrm{d}s\\ &= \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\sum_{i=0}^\infty \binom{\alpha-1}{i}(-1)^i\frac{1}{i+\beta}(1-t)^{i+\beta}. }$$

The assumption $0\lt \alpha\lt 1$ implies $\alpha-1$ is negative, in which case the Binomial coefficients may be expressed as $$\binom{\alpha-1}{i}(-1)^i = \frac{(1-\alpha)(2-\alpha)\cdots(i-\alpha)}{i!}=\frac{\Gamma(i+1-\alpha)}{\Gamma(1-\alpha)\,i!}.$$ Using the reflection formula $$\Gamma(\alpha)\Gamma(1-\alpha) = \frac{\pi}{\sin(\pi\alpha)},$$ re-express $p$ as $$\eqalign{ p_{t,\beta}(\alpha) &= \frac{\sin(\pi\alpha)\Gamma(\alpha+\beta)}{\pi\Gamma(\beta)}(1-t)^\beta\left(\frac{\Gamma(1-\alpha)}{\beta}+\sum_{i=1}^\infty \frac{\Gamma(i+1-\alpha)}{i!}\frac{(1-t)^{i}}{i+\beta}\right). }$$

Since each term is positive, lower bounds can be obtained by truncating the sum. Stopping at the zeroth term (which is explicitly written out) gives $$p_{t,\beta}(\alpha) \ge \frac{\sin(\pi\alpha)\Gamma(\alpha+\beta)}{\pi\Gamma(\beta)}(1-t)^\beta\frac{\Gamma(1-\alpha)}{\beta}\approx \alpha \left(\frac{(1-t)^\beta}{\beta}\right).\tag{2}$$ The relative error is on the order of $1-t,$ which will be excellent for $t\approx 1.$

Together, the bounds $(1)$ and $(2)$ give $$\alpha \left(\frac{(1-t)^\beta}{\beta}\right) \lt p_{t,\beta}(\alpha) \lt \alpha \left(\frac{(1-t)^\beta}{t\beta}\right)$$ for $\alpha \approx 0.$ For $t\approx 1$ and $\alpha \approx 0$ these inequalities work well. Here, to illustrate, are plots (colored curves) of $p_{t,\beta}(\alpha)$ for the range $10^{-6}\lt \alpha\lt 10^{-1};$ the first row is for $\beta=1$ and the second for $\beta=20.$ Both scales are logarithmic. The bounds $(1)$ and $(2)$ are plotted with dotted lines. Evidently they are correct and, for larger $t,$ are very accurate.
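The final double inequality is easy to sanity-check numerically against R's exact Beta tail probability. The parameter values below are arbitrary illustrations, chosen with $\alpha$ small and $t$ near 1 as the derivation requires.

```r
# Check bounds (1) and (2):
#   alpha*(1-t)^beta/beta  <  P(X > t)  <  alpha*(1-t)^beta/(t*beta)
alpha <- 1e-4; beta <- 2; t <- 0.9

p     <- pbeta(t, alpha, beta, lower.tail = FALSE)  # exact tail probability
lower <- alpha * (1 - t)^beta / beta                # bound (2)
upper <- alpha * (1 - t)^beta / (t * beta)          # bound (1)

c(lower = lower, p = p, upper = upper)
```

With these values the exact probability lands strictly between the two bounds, as the analysis predicts.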
Tail bound of beta distribution when $\alpha$ is sufficiently close to zero while $\beta$ greater than 1
Graphical comment: CDF plots of $\mathsf{Beta}(a, 2)$ for $a=.01, .05, .1, .15, .2, .25, .3$ (respective colors red through purple).
GAM factor smooth interaction--include main effect smooth?
You need to be careful with ordered factors here in mgcv, as they aren't doing what I think you want to be fitting. If you pass an ordered factor to by, then gam() etc. set up a smooth for all the levels except the reference level, and furthermore these are set up as smooth differences between the reference level and the level for a specific smooth.

What is happening in your first model is that the effect of age in the reference level is modelled as a constant term (it is absorbed into the intercept), with the effect of age in the other levels being smooth differences from this constant. In the second model, you add s(age), which then models the smooth effect of age in the reference level. Now the by smooths model smooth differences from this no-longer-constant reference smooth.

I suspect that in the second model all the levels of sex respond similarly to age, hence there are no large deviations from the smooth for the reference level of sex and the difference terms are not significant. In the first model, the effect of age in the reference level was constant, so the difference smooths picked up the actual non-linear effect of age and hence were significantly different from zero.

If you just want to estimate a model with a separate smooth function of age for each level of sex, I would use an unordered factor (factor(..., ordered = FALSE), not ordered() or factor(..., ordered = TRUE)). The model would then be

y ~ fsex + s(age, by = fsex)

where fsex is the unordered factor. If you want the model to be set up explicitly like ANOVA contrasts (estimate an effect for the reference level, then differences between individual levels and the reference), then you need to fit the model as per your second example with an ordered factor:

y ~ osex + s(age) + s(age, by = osex)

where osex is the ordered factor. But note that in this model s(age) is not the main smooth effect of age; it is the smooth effect of age in the reference level of osex.
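The two parameterisations can be sketched end-to-end on simulated data. The data-generating process and the names fsex/osex below are illustrative, not taken from the question.

```r
# Sketch: separate-smooths vs reference-plus-differences parameterisations.
library(mgcv)

set.seed(1)
n   <- 400
age <- runif(n, 0, 80)
sex <- sample(c("F", "M"), n, replace = TRUE)
y   <- sin(age / 15) + 0.5 * (sex == "M") + rnorm(n, sd = 0.3)
d   <- data.frame(y, age,
                  fsex = factor(sex),                 # unordered
                  osex = ordered(sex, c("F", "M")))   # ordered

# One smooth of age per level of sex (unordered factor):
m1 <- gam(y ~ fsex + s(age, by = fsex), data = d)

# Reference smooth plus smooth *differences* from it (ordered factor):
m2 <- gam(y ~ osex + s(age) + s(age, by = osex), data = d)

summary(m1)
summary(m2)
```

In m2, the s(age,by=osex) smooth estimates how the M curve departs from the F curve; here the true curves differ only by a constant, so that difference smooth should be close to flat.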
GAM factor smooth interaction--include main effect smooth?
You need to be careful with ordered factors here in mgcv as they aren't doing what I think you want to be fitting. If you pass an ordered factor to by, then gam() etc set up a smooth for all the level
GAM factor smooth interaction--include main effect smooth? You need to be careful with ordered factors here in mgcv as they aren't doing what I think you want to be fitting. If you pass an ordered factor to by, then gam() etc set up a smooth for all the levels except the reference level, and further more they are set up as smooth differences between the reference level and the level for a specific smooth. What is happening in your first model is that the reference level of age is modelled as a constant term (it is the intercept), with the effect of age for the other levels being smooth differences from this constant. In the second model, you add s(age), which then models the smooth effect of age in the reference level. Now, the by smooths model smooth differences from this no-longer-constant reference smooth. I suspect that in the second model, all the levels of sex respond similarly to age hence there are no large deviations from the smooth for the reference level of sex and hence the terms are not significant. In the first model, the effect of age for the reference level was constant, so the difference smooths picked up the actual non-linear effect of age and hence were significantly different from zero. If you just want to estimate a model with separate smooth function of age for each level of sex I would use an unordered factor (factor(..., ordered = FALSE), not ordered() or factor(..., ordered = TRUE). The the model would be: y ~ fsex + s(age, by = fsex) where fsex is the unordered factor. If you want the model to be explicitly set up like ANOVA contrasts (estimate an effect for the reference level then have differences between individual levels and the reference), then you need to fit the model as per your second example with and ordered factor y ~ osex + s(age) + s(age, by = osex) where osex is the ordered factor. But note that in this model, s(age) is not the main smooth effect of age. It is the smooth effect of age in the reference level of osex.
45,608
Normality testing with very large sample size?
Continuation from comment: If you are using simulated normal data from R, then you can be quite confident that what purport to be normal samples really are. So there shouldn't be 'quirks' for the Shapiro-Wilk test to detect. Checking 100,000 standard normal samples of size 1000 with the Shapiro-Wilk test, I got rejections just about 5% of the time, which is what one would expect from a test at the 5% level.

set.seed(2019)
pv = replicate( 10^5, shapiro.test(rnorm(1000))$p.val )
mean(pv <= .05)
[1] 0.05009

Addendum. By contrast, the distribution $\mathsf{Beta}(20,20)$ "looks" very much like a normal distribution, but isn't exactly normal. If I do the same simulation for this approximate model, Shapiro-Wilk rejects about 7% of the time. Looked at from the perspective of power, that's not great. But it seems Shapiro-Wilk is sometimes able to detect that the data aren't exactly normal. This is a long way from "always," but I think $\mathsf{Beta}(20,20)$ is closer to normal than a lot of real-life "normal" data are. (And the link says always may be "a bit strongly stated." I suspect the greatest trouble may come with samples a lot bigger than 1000, and for some normal approximations that are quite useful--even if imperfect.) "Not every statistically significant difference is a difference of practical importance." Sometimes, people who should know better seem to forget that when doing goodness-of-fit tests.

set.seed(2019)
pv = replicate( 10^5, shapiro.test(rbeta(1000, 20,20))$p.val )
mean(pv <= .05)
[1] 0.07152
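For readers working in Python rather than R, here is a rough analogue of the same experiment using scipy.stats.shapiro (this is a port, not the original code; the replication count is cut to 2,000 to keep the runtime modest, so the rates bounce around a bit more):

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(2019)
reps, n = 2000, 1000

# Rejection rate for genuinely normal samples: close to the nominal 5% level.
rej_norm = np.mean([shapiro(rng.normal(size=n))[1] <= 0.05
                    for _ in range(reps)])

# Rejection rate for Beta(20, 20) samples, which are only approximately normal:
# somewhat above 5%, mirroring the ~7% seen in the R run above.
rej_beta = np.mean([shapiro(rng.beta(20, 20, size=n))[1] <= 0.05
                    for _ in range(reps)])

print(rej_norm, rej_beta)
```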
45,609
Normality testing with very large sample size?
As @gg pointed out in a comment, this entire discussion is pointless without defining how normal-like data have to be for us to consider them "normal enough". In practice, I often like the following criteria:

- Skewness close to 0, maybe a (-1, 1) range, or whatever you feel more comfortable with depending on "how normal-like is normal enough".
- Kurtosis close to 3 (or excess kurtosis close to 0). High kurtosis is often a greater issue than low kurtosis as it leads to more outliers.
- Median not far away from the mean.
- QQ-plots are your friends!
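A quick Python sketch of how one might compute these summaries for a sample (the helper name is made up; note that scipy's kurtosis returns excess kurtosis by default, so the target value is 0 rather than 3):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def normality_summary(x):
    """Summaries behind the checklist above (hypothetical helper)."""
    return {
        "skewness": skew(x),                              # want roughly inside (-1, 1)
        "excess_kurtosis": kurtosis(x),                   # fisher=True: want near 0
        "median_minus_mean": np.median(x) - np.mean(x),   # want near 0
    }

rng = np.random.default_rng(0)
s = normality_summary(rng.normal(size=100_000))
print(s)
```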
45,610
Normality testing with very large sample size?
...However, if the sample size is very large, the test is extremely "accurate" but practically useless because the confidence interval is too small. They will always reject the null, even if the distribution is reasonably normal enough... What if you take a sub-sample of size 100 or 300 from the large sample consisting of several thousand observations or more? If I'm not mistaken, such sub-samples will reflect the same distribution but will work better with the common normality tests.
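A minimal Python sketch of this idea (illustrative numbers, assuming scipy is available): repeatedly test sub-samples of size 300 drawn from a large sample instead of testing all 100,000 points at once.

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(1)
big = rng.normal(size=100_000)   # stand-in for a very large "normal enough" sample

# Test 200 random sub-samples of size 300 instead of the full sample.
ps = np.array([shapiro(rng.choice(big, size=300, replace=False))[1]
               for _ in range(200)])
rate = np.mean(ps <= 0.05)
print(rate)   # close to the nominal 5% rejection rate for normal data
```

At n = 300 the test has far less power to flag trivial departures, which is exactly the point of sub-sampling here.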
45,611
How does logistic regression "elegantly" handle unbalanced classes?
No, we can't include the prevalence as a feature. After all, this is exactly what we are trying to model! What FH means here is that if there are features that contribute to the prevalence of the target, these will have appropriate parameter estimates in the logistic regression. If a disease is extremely rare, the intercept will be very small (i.e., negative with a large absolute value). If a certain predictor increases the prevalence, then this predictor's parameter estimate will be positive. (Predictors could include, e.g., a gene SNP, or the result of a blood test.) The end result is that logistic regression, if the model is correctly specified, will give you the correct probability for a new sample to be of the target class, even if the target class is overall very rare. This is as it should be. The statistical part of the exercise ends with a probabilistic prediction. What decision should be taken based on this probabilistic prediction is a different matter, which needs to take costs of decisions into account. No, there is no threshold involved in logistic regression. (Nor in any other probabilistic model.) Per above, a threshold (or multiple ones!) may be used later, in weighing the probabilistic prediction against costs. Note the context in which FH discusses re-estimating the intercept: it is one of oversampling to address rare outcomes. Oversampling can be used in logistic regression. One would first fit a model to a sample that oversamples the rare outcome we are interested in. This gives us useful parameter estimates for the predictors we have in the model, but the intercept coefficient will be biased high. Then, in a second step, we can nail down the predictor parameter estimates and re-estimate the intercept coefficient only by refitting the model to the full sample. FH and I would argue that no, we should not aim for a precision/recall tradeoff. 
Instead, we should be aiming for well-calibrated probabilistic predictions, which can then be used in a decision, along with, and I am repeating myself, the consequences of misclassification and other misdecisions. And as a matter of fact, this is exactly what logistic regression does. It does not care at all about precision or recall. What it cares about is the likelihood. Which is just another way of looking at a probabilistic model. And no, bias is not a desirable trait in this context.
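The two-step oversampling idea can be sketched in Python with made-up synthetic data. One caveat: instead of literally refitting the intercept on the full sample as described above, this sketch uses the standard case-control shortcut of subtracting the log of the sampling-rate ratio from the intercept; the two approaches agree asymptotically.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 200_000
x = rng.normal(size=n)
p = 1.0 / (1.0 + np.exp(-(-4.0 + 2.0 * x)))   # true model; positives are rare
y = rng.binomial(1, p)

# Step 1: oversample the rare class -- keep every positive and a 5% subsample
# of negatives -- and fit an (essentially unpenalized) logistic regression.
keep = (y == 1) | (rng.random(n) < 0.05)
fit = LogisticRegression(C=1e9, max_iter=1000).fit(x[keep, None], y[keep])

# Step 2: the slope is unaffected, but the intercept is inflated by the log
# of the sampling-rate ratio, log(1 / 0.05); correcting it recovers the
# intercept appropriate for the full sample.
slope = fit.coef_[0, 0]
b0 = fit.intercept_[0] - np.log(1.0 / 0.05)
print(slope, b0)   # close to the true values 2 and -4
```

Predicted probabilities from the corrected model are then calibrated for the true (rare) prevalence, not the artificial balance of the oversampled set.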
45,612
Why does Judea Pearl call his causal graphs Markovian?
He is referring to the Parental Markov Condition (see theorems 1.2.7 and 1.4.1 of Causality). Given a graph $G$, we say a distribution $P$ is Markov relative to $G$ if every variable is independent of all its non-descendants conditional on its parents. An acyclic causal model $M$ with jointly independent error terms induces a probability distribution over the observed variables which is Markovian relative to $G(M)$.
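A toy numerical illustration of the condition (a hypothetical linear-Gaussian chain, not an example from the book): in the chain X → Y → Z with jointly independent errors, Z is independent of its non-descendant X once we condition on its parent Y.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500_000
# Chain X -> Y -> Z with jointly independent errors (toy linear SEM).
x = rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)
z = 0.8 * y + rng.normal(size=n)

# Marginally, X and Z are clearly dependent...
r_xz = np.corrcoef(x, z)[0, 1]

# ...but conditioning on Z's parent Y removes the dependence (for jointly
# Gaussian variables, zero partial correlation is conditional independence).
bx, ax = np.polyfit(y, x, 1)
bz, az = np.polyfit(y, z, 1)
r_xz_given_y = np.corrcoef(x - (bx * y + ax), z - (bz * y + az))[0, 1]
print(r_xz, r_xz_given_y)   # marginal correlation well away from 0; partial near 0
```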
45,613
Why does Judea Pearl call his causal graphs Markovian?
These graphs do satisfy the Markov property - once you condition on the parent node, from which the causal arrow comes, the variable is independent of earlier ancestors that causally affect that parent (unless there is a separate arrow directly from the ancestor node to the present node).
45,614
Why is the unbiased sample variance estimator so ubiquitous in science?
Dividing by (n+1) minimizes the MSE only for normally distributed data. In general, a variance estimator of the form $$s_k^2 = \frac{1}{k}\sum_{i=1}^n (x_i-\overline{x})^2$$ has minimal MSE for $$k=(n-1)\left[ \frac{1}{n}\left(\frac{\mu_4}{\mu_2^2} - \frac{n-3}{n-1}\right) + 1\right]$$ where $\mu_2$ and $\mu_4$ are the second and fourth central moments of the distribution (see here for a proof), which means that $\mu_4/\mu_2^2$ is the kurtosis. For the normal distribution, $\mu_4 = 3\mu_2^2$ and the above formula reduces to $k=n+1$. As you cannot know whether your data are normally distributed, it is thus not clear whether n+1 actually is the best choice. The bias correction by dividing by n-1, however, is universal and does not require normality. This might be a reason why the bias-corrected version is preferred. From Jensen's inequality, it follows that $E(X^4)\geq \big(E(X^2)\big)^2$, and therefore $\mu_4/\mu_2^2\geq 1$. The optimal choice for k is thus always greater than n-1: $$k \geq \frac{n-1}{n} \left(1 - \frac{n-3}{n-1}\right) + (n-1) = \frac{2}{n} + (n-1) > n-1$$ There might even be distributions (those restricted to a small range without outliers) for which k=n-1 actually is close to the optimal choice with respect to the MSE. The example of the normal distribution shows, however, that in most practical situations the optimal k is greater than this value, and choosing the empirical variance (dividing by n) typically yields an estimate that is on average closer to the true value than the bias-corrected empirical variance (dividing by n-1). Unlike the bias correction, this is not guaranteed in all cases, though.
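A quick Monte Carlo check of the normal case (illustrative Python with a fixed seed): for normal data, dividing the sum of squared deviations by n+1 gives a smaller MSE than dividing by n, which in turn beats the unbiased n-1.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 10, 200_000
x = rng.normal(size=(reps, n))                  # true variance is 1
ss = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

# MSE of ss/k against the true variance, for the three classic divisors.
mse = {k: np.mean((ss / k - 1.0) ** 2) for k in (n - 1, n, n + 1)}
print(mse)   # for normal data: mse[n+1] < mse[n] < mse[n-1]
```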
45,615
Why is the unbiased sample variance estimator so ubiquitous in science?
Personally I prefer $\hat\sigma^2$ over $\hat\sigma_{MMSE}^2$ for a reason different from unbiasedness. If the estimation problem were in fact symmetric, i.e., too low values were as bad as too high values of the same size, I'd think that the MSE would be a good measure, and optimum MSE would indeed be good. (Note that the expected value, and by extension the concept of unbiasedness, also implicitly treats the estimation problem as symmetric, as deviations on both sides count the same in the computation of the expected value.) But actually variances are bounded from below, and differences between small variances should be taken as more important than the same differences between large variances; also, by implication, there should be more loss for the same deviation in the negative direction than in the positive direction (if the true $\sigma^2$ is 1, 0.5 should be seen as worse than 1.5; in fact, in robust statistics, $\hat\sigma^2\to 0$ is seen as "breakdown" along with $\hat\sigma^2\to \infty$). So in fact one could think that we'd need an estimator to optimise an asymmetric loss function rather than the MSE, and correspondingly there would even need to be an alternative definition of unbiasedness. Now this would be hard to do, as it isn't at all obvious what this loss function should look like, and different choices would have different implications. One could probably do some research work on this and publish a nice paper, but in practical analysis situations, that's not really what we want to do. So in standard routine data analysis I bite the bullet and use $\hat\sigma^2$, as this is implemented everywhere and I can explain its features in a fairly generally understandable way, even though secretly I think that, because the unbiasedness feature ignores asymmetry, $\hat\sigma^2$ likely tends to be too small. 
I wouldn't worry much about any "complication" as a result of using $\hat\sigma_{MMSE}^2$; just using a different factor would be easy to do, and the optimum MSE argument is appealing on the surface, but in fact $\hat\sigma_{MMSE}^2$ is even smaller than $\hat\sigma^2$, which I honestly think is rather too small already, if anything. So no, $\hat\sigma_{MMSE}^2$ is not a good alternative! PS: Of course one could have the same discussion involving the Maximum Likelihood variance estimator with factor $\frac{1}{n}$, which is sometimes used, and has a better MSE than $\hat\sigma^2$. PPS: One could actually interpret the difference between the optimum unbiased, the ML, and the minimum MSE estimator as an expression of the asymmetry of the problem.
45,616
Why is the unbiased sample variance estimator so ubiquitous in science?
Calculating the sample variance as the sum of squared deviations divided by n + 1 (instead of n - 1) will lead to underestimating the variance, which will lead to confidence intervals with lower than nominal probability coverage. That's probably at least part of the reason that an unbiased estimator of variance is preferred over minimizing MSE.
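A small simulation, assuming normal data, makes the coverage loss concrete: the usual 95% t-interval built from the n-1 divisor hits its nominal coverage, while the same interval built from the shrunken n+1 divisor is too narrow.

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(5)
n, reps = 10, 100_000
x = rng.normal(size=(reps, n))                  # true mean is 0
m = x.mean(axis=1)
ss = ((x - m[:, None]) ** 2).sum(axis=1)
tcrit = t.ppf(0.975, df=n - 1)

def coverage(k):
    # Nominal 95% t-interval, but using the variance estimate ss/k.
    half = tcrit * np.sqrt(ss / k / n)
    return np.mean((m - half <= 0) & (0 <= m + half))

cov_unbiased = coverage(n - 1)
cov_np1 = coverage(n + 1)
print(cov_unbiased, cov_np1)   # about 0.95 vs noticeably below 0.95
```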
45,617
Can i include the product of two random variables? Or do I risk collinearity?
Operating under your stated assumption that $x_3=x_1x_2$ and $x_4=x_1/x_2$ need to be entertained as possible explanatory variables in a model of a response $Y$ (and therefore not summarily dropped because they might be a little inconvenient), it can be helpful to consider alternative ways of expressing this model. As stated, the model is of the form $$Y \sim F(x_1, x_2, x_3, x_4; \theta) = F(x_1, x_2, x_1x_2, x_1/x_2; \theta)$$ for a given distribution family $F$ involving unknown parameters $\theta$ to be determined. For instance, a linear regression model would involve a five-dimensional parameter $\theta = (\beta_0, \beta_1, \beta_2, \beta_3, \beta_4)$ in the form $$E[Y] = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_3 + \beta_4 x_4 = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1x_2 + \beta_4 x_1/x_2.$$ For simplicity of exposition, let's analyse the linear regression model: it will be clear how the analysis extends to other models. One way is to restate the model in terms of $x_4$ and $x_2,$ which algebraically imply $x_1=x_2x_4$ and $x_3=x_2^2x_4:$ $$E[Y] = \beta_0 + \beta_1 x_4x_2 + \beta_2 x_2 + \beta_3 x_4x_2^2 + \beta_4 x_4 = \beta_0 + \beta_2 x_2 + \beta_4 x_4 + x_4\left(\beta_1 x_2 + \beta_3 x_2^2\right).$$ The last term would ordinarily be characterized as an interaction between $x_4$ and a quadratic function of $x_2.$ Since, except in very special circumstances, interactions should be included only when their component terms are included, this suggests you ought to extend the model to include an $x_2^2$ term. It would have the form $$E[Y] = \beta_0 + \left(\beta_2 x_2 + \beta_5 x_2^2\right) + \beta_4 x_4 + x_4\left(\beta_1 x_2 + \beta_3 x_2^2\right).$$ That is a model involving (a) $x_4$ and (b) the simplest possible quadratic spline of $x_2.$ Such models are common: the quadratic terms allow for some amount of nonlinear response in $x_2$ and the interaction allows for the response to change with different values of $x_4$ in a controlled way. 
These simple algebraic manipulations demonstrate that the proposed model is not at all unusual. They reframe it in terms of standard, well-understood concepts. There remains the question of collinearity. That collinearity could be a problem is demonstrated by the case where both $x_1$ and $x_2$ are binary variables coded as $\pm 1.$ In this case, $x_1/x_2$ and $x_1x_2$ are always equal (not just collinear). On the other hand, that collinearity might not be much of a problem can be demonstrated by exhibiting some sample data with relatively little collinearity. We would want $x_2$ to be orthogonal to $x_2^2,$ of course, and then everything will be ok provided the interactions don't introduce collinearity. Unfortunately, $x_4$ and $x_4x_2^2$ are likely to be positively correlated. But by how much? Consider the data $x_2 = (-1,0,1,\, -1,0,1,\, -1,0,1)$ and $x_4 = (-1,\sqrt{3},-1,\,0,0,0,\,1,-\sqrt{3},1).$ The covariance matrix of the columns $(x_2, x_2^2, x_4x_2, x_4, x_4x_2^2)$ is $$\pmatrix{3&0&0&0&0 \\ 0 & 1 & 0&0&0 \\ 0&0&2&0&0 \\ 0&0&0&5&2 \\ 0&0&0&2&2}/4.$$ It is nearly orthogonal, with correlation only between the last two variables (as expected). (Notice that introducing $x_2^2$ has not changed anything, because this variable is orthogonal to all the others.) The ratio of the largest to the smallest eigenvalue (its condition number) is $6.$ This is not beautiful, but it's not bad, either. One could easily obtain reliable coefficient estimates with such explanatory variables. If you don't have the luxury of choosing the values of $x_2$ and $x_4$ to arrange such near-orthogonality, then you will simply have to proceed as anyone would always do in such cases: investigate the data you have and deal with any collinearity in the usual ways (which would include ignoring it; dropping variables based on scientific considerations; selecting some principal components; using a Lasso; and so on).
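The covariance matrix and condition number quoted above can be checked numerically (numpy's cov uses the n-1 divisor, which matches the quoted matrix up to the factor of 4):

```python
import numpy as np

x2 = np.array([-1, 0, 1, -1, 0, 1, -1, 0, 1], dtype=float)
s3 = np.sqrt(3.0)
x4 = np.array([-1, s3, -1, 0, 0, 0, 1, -s3, 1])

# Columns (x2, x2^2, x4*x2, x4, x4*x2^2); covariance with divisor n-1 = 8.
X = np.column_stack([x2, x2**2, x4 * x2, x4, x4 * x2**2])
C = np.cov(X, rowvar=False)
eig = np.linalg.eigvalsh(C)
print(4 * C)                   # the matrix quoted in the text, times 4
print(eig.max() / eig.min())   # condition number: 6
```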
Can I include the product of two random variables? Or do I risk collinearity?
You won't have perfect collinearity (as per your question), but you do risk multicollinearity issues with your two additional regressors. While they're not algebraically linear combinations of the two original predictors, it can be the case that these variables (x1-x4) in a particular sample lie close to a linear subspace - with the typical consequences of near-multicollinearity. For example, if the two original variates both have very small coefficients of variation then their product can be quite closely related to their sum (or some other linear combination if they're dissimilar in size). This can happen even if the original variables are not highly correlated. Similarly, high correlation can happen with the ratio and the difference.
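The small-coefficient-of-variation effect is easy to demonstrate by simulation; here is a sketch (the means, standard deviations, and sample size are made-up illustrative values):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(100, 1, n)   # coefficient of variation = 1%
x2 = rng.normal(100, 1, n)   # independent of x1, also CV = 1%

prod = x1 * x2               # the would-be interaction regressor
lin = x1 + x2                # a plain linear combination of x1 and x2
r = np.corrcoef(prod, lin)[0, 1]
print(r)                     # very close to 1: near-collinearity
```

Note that x1 and x2 are uncorrelated here, yet the product is almost perfectly correlated with the sum.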
Can I include the product of two random variables? Or do I risk collinearity?
No, you don’t risk collinearity, because the $x_i$ are not linearly dependent in general; i.e., the equation below has just one solution holding for all possible $x_i$: $$a_1x_1+a_2x_2+a_3x_3+a_4x_4=0$$ and that is $a_i=0$. In the $x_5=x_1+x_2$ case, by contrast, the following equation has non-zero solutions such that $a_1=a_2=-a_5$: $$a_1x_1+a_2x_2+a_5x_5=0$$
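A sketch illustrating this point (the data values are made up): for generic values of $x_1$ and $x_2$, the design matrix containing $x_1$, $x_2$, $x_1x_2$, and $x_1/x_2$ has full column rank, so there is no exact linear dependence:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.uniform(1, 2, 50)
x2 = rng.uniform(1, 2, 50)

# Design matrix with the raw variables, their product, and their ratio
X = np.column_stack([x1, x2, x1 * x2, x1 / x2])
print(np.linalg.matrix_rank(X))   # 4: no exact collinearity
```

Exact collinearity is ruled out for generic data, though the other answers point out that near-collinearity in a particular sample is still possible.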
Can I include the product of two random variables? Or do I risk collinearity?
Don't do

y = x1 + x2 + x3 + x4

...which is equivalent in your case to

y = x1 + x2 + (x1 * x2) + (x1 / x2)

To include the product of the predictors, it is better practice to create your model with this formula: y = x1 * x2. The * sign indicates that you are also using the interaction effect, the equivalent of your x3. Alternate notation for this: y = x1 + x2 + x1:x2. In short, your x3 is already taken into account when you include the interaction term. Do note that this is more of a good practice/clarity thing. Using a separate variable to denote the interaction term would also do the trick, in a pinch, but could cause problems later if a function in your statistical software / your reviewers / your advisor requires you to explicitly state that x3 is the interaction term. Also note that the previous syntax will not include the x4 variable, unless of course the interaction term happens to be equal to x4. What about x4? For the quotient of the predictors, your x4, as a general principle and not a fixed rule, it is better to avoid adding it. There are exceptions, as shown by whuber's examples in this thread. In some cases, you might also want to use x4 as a kind of multiplicative inverse, for instance x1 * x4 is equivalent to x1 ^ 2 * (1 / x2) and so on. As always, one should take a look at the possible problems when adding a predictor, such as multicollinearity. As for x3, I would also avoid creating a separate variable if possible and not too inconvenient, and use the model's formula to denote this information instead. Here's an example, assuming a linear model. We create a model with two predictors and the interaction effect.
Note that I could also use the formula Sepal.Length ~ Sepal.Width + Petal.Length + Sepal.Width:Petal.Length with the same result:

> model <- lm(data = iris, Sepal.Length ~ Sepal.Width * Petal.Length)
> summary(model)

Call:
lm(formula = Sepal.Length ~ Sepal.Width * Petal.Length, data = iris)

Residuals:
     Min       1Q   Median       3Q      Max
-0.99594 -0.21165 -0.01652  0.21244  0.77249

Coefficients:
                         Estimate Std. Error t value Pr(>|t|)
(Intercept)               1.40438    0.53253   2.637  0.00926 **
Sepal.Width               0.84996    0.15800   5.379 2.91e-07 ***
Petal.Length              0.71846    0.13886   5.174 7.45e-07 ***
Sepal.Width:Petal.Length -0.07701    0.04305  -1.789  0.07571 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.3308 on 146 degrees of freedom
Multiple R-squared:  0.8436,	Adjusted R-squared:  0.8404
F-statistic: 262.5 on 3 and 146 DF,  p-value: < 2.2e-16

Then I add a variable for the product and include it in the model instead of the interaction:

> iris$prod <- iris$Sepal.Width * iris$Petal.Length
> model.prod <- lm(data = iris, Sepal.Length ~ Sepal.Width + Petal.Length + prod)
> summary(model.prod)

Call:
lm(formula = Sepal.Length ~ Sepal.Width + Petal.Length + prod, data = iris)

Residuals:
     Min       1Q   Median       3Q      Max
-0.99594 -0.21165 -0.01652  0.21244  0.77249

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)   1.40438    0.53253   2.637  0.00926 **
Sepal.Width   0.84996    0.15800   5.379 2.91e-07 ***
Petal.Length  0.71846    0.13886   5.174 7.45e-07 ***
prod         -0.07701    0.04305  -1.789  0.07571 .
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.3308 on 146 degrees of freedom
Multiple R-squared:  0.8436,	Adjusted R-squared:  0.8404
F-statistic: 262.5 on 3 and 146 DF,  p-value: < 2.2e-16

So, it's really the same thing.
I can emphasize this by explicitly asking R to include both the product variable and the interaction term:

> model.inter <-
+     lm(data = iris,
+        Sepal.Length ~ Sepal.Width + Petal.Length + Sepal.Width:Petal.Length + prod)
> summary(model.inter)

Call:
lm(formula = Sepal.Length ~ Sepal.Width + Petal.Length + Sepal.Width:Petal.Length +
    prod, data = iris)

Residuals:
     Min       1Q   Median       3Q      Max
-0.99594 -0.21165 -0.01652  0.21244  0.77249

Coefficients: (1 not defined because of singularities)
                          Estimate Std. Error t value Pr(>|t|)
(Intercept)                1.40438    0.53253   2.637  0.00926 **
Sepal.Width                0.84996    0.15800   5.379 2.91e-07 ***
Petal.Length               0.71846    0.13886   5.174 7.45e-07 ***
prod                      -0.07701    0.04305  -1.789  0.07571 .
Sepal.Width:Petal.Length        NA         NA      NA       NA
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.3308 on 146 degrees of freedom
Multiple R-squared:  0.8436,	Adjusted R-squared:  0.8404
F-statistic: 262.5 on 3 and 146 DF,  p-value: < 2.2e-16

Using the quotient (x1 / x2) as well as the interaction actually diminished the adjusted R-squared a little bit, since it does not contribute much and I had to add another predictor. It did improve the residuals, but just one tiny bit.

> iris$divid <- iris$Sepal.Width / iris$Petal.Length
> model.divid <-
+     lm(data = iris, Sepal.Length ~ Sepal.Width + Petal.Length + divid + prod)
> summary(model.divid)

Call:
lm(formula = Sepal.Length ~ Sepal.Width + Petal.Length + divid +
    prod, data = iris)

Residuals:
     Min       1Q   Median       3Q      Max
-0.98673 -0.21684  0.00684  0.21559  0.71138

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept)   1.58216    0.53349   2.966  0.00353 **
Sepal.Width   0.55849    0.20998   2.660  0.00870 **
Petal.Length  0.72299    0.13733   5.265 4.97e-07 ***
divid         0.28102    0.13526   2.078  0.03951 *
prod         -0.04455    0.04535  -0.982  0.32750
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 0.3271 on 145 degrees of freedom
Multiple R-squared:  0.8481,	Adjusted R-squared:  0.8439
F-statistic: 202.4 on 4 and 145 DF,  p-value: < 2.2e-16

So why not use the quotient of x1 and x2 in this example? I won't include the code and output for brevity, but with this data the adjusted R-squared for divid ~ Sepal.Width + Petal.Length was 94%, with a p-value < 2.2e-16. For the provided example, adding your x4 would have been a bad idea.
Why no variance term in Bayesian logistic regression?
Logistic regression, Bayesian or not, is a model defined in terms of the Bernoulli distribution. The distribution is parametrized by the "probability of success" $p$, with mean $p$ and variance $p(1-p)$; i.e., the variance follows directly from the mean. So there is no separate variance term, which is what the quote seems to say.
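A quick illustration (a sketch with simulated draws; the value of $p$ is arbitrary) that the Bernoulli variance is determined by the mean, so no separate variance parameter exists:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 0.3
draws = rng.binomial(1, p, size=200_000)   # Bernoulli(p) samples
print(draws.mean())   # ≈ p = 0.3
print(draws.var())    # ≈ p * (1 - p) = 0.21
```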
Does computing the test statistic for $H_{0}\text{: }\beta = c$, for $c \ne 0$ in a regression require a funky distribution?
When testing $H_0:\rho=\rho_0\,(\ne0)$ against any suitable alternative, one usually resorts to the variance stabilising Fisher transformation of the sample correlation coefficient $r$. The usual t-test you are referring to is mainly reserved for the case $\rho_0=0$. For moderately large $n$ (for example when $n\ge 25$), we have $$\sqrt{n-3}(\tanh^{-1}(r)-\tanh^{-1}(\rho))\stackrel{a}\sim N(0,1)$$ So the appropriate test statistic under $H_0$ would be $$T=\sqrt{n-3}(\tanh^{-1}(r)-\tanh^{-1}(\rho_0))\stackrel{a}\sim N(0,1)$$ In simple linear regression with normality assumption of errors (having variances $\sigma^2$), we have the exact distribution of the least square estimator $\hat\beta$ of the slope $\beta$, given by $\frac{(\hat\beta-\beta)\sqrt{s_{xx}}}{\sigma}\sim N(0,1)$ where $s_{xx}=\sum (x_i-\bar x)^2$. Now if $\sigma$ is known, we test $H_0:\beta=\beta_0$ using the statistic $$T_1=\frac{(\hat\beta-\beta_0)\sqrt{s_{xx}}}{\sigma}\stackrel{H_0}\sim N(0,1)$$ If $\sigma$ is not known we estimate $\sigma$ by the residual sd $s$, where $s^2=\frac{SSE}{n-2}$. The test statistic is now $$T_2=\frac{(\hat\beta-\beta_0)\sqrt{s_{xx}}}{s}\stackrel{H_0}\sim t_{n-2}$$ The variance of $\hat\beta$ in both these cases is independent of $\beta$ (the parameter of interest), so the variance stabilising transformation on $r$ while testing '$\rho=\rho_0$' is not required here. Not to mention that the test for '$\rho=\rho_0$' is a large sample test, while those involving $\beta$ are not.
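The Fisher-z test described above can be sketched as follows (the values of $r$, $n$, and $\rho_0$ are made up for illustration, and the helper name is mine):

```python
import numpy as np
from math import erfc, sqrt

def fisher_z_test(r, n, rho0):
    """Two-sided large-sample test of H0: rho = rho0 via Fisher's z transform."""
    T = sqrt(n - 3) * (np.arctanh(r) - np.arctanh(rho0))
    p = erfc(abs(T) / sqrt(2))   # 2 * P(N(0,1) > |T|)
    return T, p

T, p = fisher_z_test(r=0.6, n=50, rho0=0.4)
print(T, p)   # T ≈ 1.85, p ≈ 0.065: cannot reject rho = 0.4 at the 5% level
```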
Does computing the test statistic for $H_{0}\text{: }\beta = c$, for $c \ne 0$ in a regression require a funky distribution?
If one is using something like a Wald statistic, a very simple way to do this is to test $H_0: \beta - c = 0$ vs $H_a: \beta - c \neq 0$. Since we know that $\hat \beta \text{ } \dot \sim N(\beta, se)$, then $\hat \beta -c \text{ } \dot \sim N(\beta - c, se)$, since $c$ is just a constant. This gives us a Wald statistic of $ \frac{\hat \beta - c}{ se }$. In the case that we are not using a Wald statistic, we can (hopefully, if the software allows) include an offset in our model. That is, we can simply add $cx^*$, where $x^*$ is the covariate of interest, directly into the linear predictor. Then, if we wanted to do something like a likelihood ratio test, we could have one model with no free parameter associated with $x^*$ and another one that includes a free $x^*\beta^*$ to be estimated.
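A minimal sketch of the Wald version, assuming a simple linear regression fit by least squares (the data and the value of $c$ are made up):

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(size=n)   # simulated data with true slope 2

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta_hat
s2 = resid @ resid / (n - 2)                   # residual variance estimate
se = sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])   # standard error of the slope

c = 2.0                          # hypothesized slope value
z = (beta_hat[1] - c) / se       # Wald statistic for H0: beta = c
p = erfc(abs(z) / sqrt(2))       # two-sided p-value (normal approximation)
print(z, p)
```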
Do studentized residuals follow t-distribution
$\newcommand{\e}{\varepsilon}$$\newcommand{\0}{\mathbf 0}$$\newcommand{\E}{\text E}$$\newcommand{\V}{\text{Var}}$I'll start by working with this in matrix form. Let $y = X\beta + \e$ be our model with $\e \sim \mathcal N(\0, \sigma^2 I)$ and $X \in \mathbb R^{n\times p}$ full rank. Then $\hat y = Hy$ where $H = X(X^TX)^{-1}X^T$ is the hat matrix. I'll use $\e$ for the actual unobserved error and $e = y - \hat y$ for the residuals. Note that $$ \E(e) = \E(y - \hat y) = X\beta - HX\beta = X\beta - X(X^TX)^{-1}X^TX\beta = \0 $$ so $e$ has mean $\0$. Additionally, $$ \V(e) = \V\left[(I - H)y\right] = \sigma^2(I - H). $$ Since $e = (I-H)y$ this means $e$ is a linear transformation of a Gaussian so $e$ is Gaussian too, thus $$ e \sim \mathcal N(\0, \sigma^2 (I-H)) $$ The covariance matrix is positive semidefinite rather than positive definite since this is supported only on the column space of $X$, but when we consider just $e_i$ it'll behave fine. A $t_k$ distribution is defined as $$ \frac{\mathcal N(0, 1)}{\sqrt{\chi^2_k / k}} $$ with independence between. Define $$ t_i = \frac{e_i}{\hat\sigma_{(i)}\sqrt{1 - h_i}} $$ where $$ \hat\sigma_{(i)}^2 = \frac{1}{n - p - 1}e_{(i)}^Te_{(i)} $$ is the error variance estimate computed for the model with observation $i$ dropped out (so the $n- p - 1$ reflects that $n-1$ was the sample size for this). Doing this means I'm considering the external studentized residuals and I'll actually get a $t$ distribution at the end. See the wikipedia article on studentized residuals for more. The numerator is $e_i \sim \mathcal N(0, \sigma^2 (1 - h_i))$ where $h_i$ is the $i$th element of $\text{diag}(H)$. This means $$ \frac{e_i}{\sigma\sqrt{1 - h_i}} \sim \mathcal N(0,1). $$ Next, consider $\hat\sigma_{(i)}^2$. 
We have $$ y_{(i)}^Ty_{(i)} = y_{(i)}^T(I_{n-1} - H_{(i)} + H_{(i)})y_{(i)} = y_{(i)}^T(I-H_{(i)})y_{(i)} + y_{(i)}^T H_{(i)} y _{(i)} $$ with $H_{(i)}$ and $I-H_{(i)}$ being idempotent and $\text{rank}(I-H_{(i)}) = n-p-1$ so by Cochran's theorem $$ y_{(i)}^T(I-H_{(i)})y_{(i)} / \sigma^2 = e_{(i)}^Te_{(i)} / \sigma^2 \sim \chi^2_{n-p-1}. $$ All together this means $$ t_i = \frac{e_i}{\hat\sigma_{(i)}\sqrt{1 - h_i}} = \frac{\frac{e_i}{\sigma\sqrt{1 - h_i}}}{\sqrt{\frac{e_{(i)}^Te_{(i)}}{\sigma^2(n-p-1)}}} $$ is the ratio of a $\mathcal N(0,1)$ distribution to a $\sqrt{\chi^2_{n-p-1} / (n-p-1)}$. And since observation $i$ does not appear in $\hat\sigma_{(i)}$ I get independence. So that means $$ t_i \sim t_{n-p-1}. $$ I would not be guaranteed independence if I didn't use $\hat\sigma_{(i)}$; if you actually want to use the internal studentized residuals that use the same $\hat\sigma^2 = \frac 1{n-p}e^Te$ for every $t_i$ then you'll get a more complicated distribution. Finally, in your particular case as the wikipedia article says we get $$ 1 - h_i = 1 - \frac 1n - \frac{(x_i - \bar x)^2}{S_{xx}} $$ so we're done. $\newcommand{\1}{\mathbf 1}$Here's a derivation of that. If we're doing simple linear regression then we'll have $X = (\1 \mid x)$ where $x \in \mathbb R^n$ is the non-intercept univariate predictor; $X$ being full rank is equivalent to $x$ not being constant. This means $$ H = X(X^TX)^{-1}X^T = (\1 \mid x)\left(\begin{array}{cc}n & x^T\1 \\ x^T\1 & x^Tx\end{array}\right)^{-1}{\1^T\choose x^T}. $$ We can use the formula for the explicit inverse of a $2\times 2$ matrix to find $$ (X^TX)^{-1} = \frac{1}{nx^Tx - (x^T\1)^2}\left(\begin{array}{cc}x^Tx & -x^T\1 \\ -x^T\1 & n\end{array}\right) $$ so all together we can do the multiplication to get $$ H = \frac{1}{n x^Tx - (\1^T x)^2}\left(x^Tx\cdot \1\1^T - x^T\1 \cdot (\1 x^T + x \1^T) + n xx^T\right). $$ This means $$ h_i = \frac{x^Tx - 2x^T\1\cdot x_i + nx_i^2}{n x^Tx - (\1^T x)^2}. 
$$ For the numerator, I can use the fact that $\1^Tx = n \bar x$ to rewrite it as $$ x^Tx - 2nx_i\bar x + n x_i^2 = x^Tx + n(x_i^2 - 2 x_i\bar x + \bar x^2 - \bar x^2) \\ = x^Tx - n\bar x^2 + n(x_i - \bar x)^2 $$ and noting $S_{xx} = x^Tx - n \bar x^2$ I have $$ h_i = \frac{S_{xx} + n(x_i - \bar x)^2}{nS_{xx}} = \frac 1n + \frac{(x_i - \bar x)^2}{S_{xx}}. $$ This means $$ 1 - h_i = 1 - \frac 1n - \frac{(x_i - \bar x)^2}{S_{xx}} $$ as desired. $\square$
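The leverage formula just derived can be checked numerically; here's a sketch comparing the diagonal of the hat matrix with $1/n + (x_i - \bar x)^2/S_{xx}$ (random made-up data):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 30
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])   # simple linear regression design

H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
h_diag = np.diag(H)

Sxx = np.sum((x - x.mean())**2)
h_formula = 1 / n + (x - x.mean())**2 / Sxx
print(np.allclose(h_diag, h_formula))   # True
```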
Do studentized residuals follow t-distribution
$\newcommand{\e}{\varepsilon}$$\newcommand{\0}{\mathbf 0}$$\newcommand{\E}{\text E}$$\newcommand{\V}{\text{Var}}$I'll start by working with this in matrix form. Let $y = X\beta + \e$ be our model with
Do studentized residuals follow t-distribution $\newcommand{\e}{\varepsilon}$$\newcommand{\0}{\mathbf 0}$$\newcommand{\E}{\text E}$$\newcommand{\V}{\text{Var}}$I'll start by working with this in matrix form. Let $y = X\beta + \e$ be our model with $\e \sim \mathcal N(\0, \sigma^2 I)$ and $X \in \mathbb R^{n\times p}$ full rank. Then $\hat y = Hy$ where $H = X(X^TX)^{-1}X^T$ is the hat matrix. I'll use $\e$ for the actual unobserved error and $e = y - \hat y$ for the residuals. Note that $$ \E(e) = \E(y - \hat y) = X\beta - HX\beta = X\beta - X(X^TX)^{-1}X^TX\beta = \0 $$ so $e$ has mean $\0$. Additionally, $$ \V(e) = \V\left[(I - H)y\right] = \sigma^2(I - H). $$ Since $e = (I-H)y$ this means $e$ is a linear transformation of a Gaussian so $e$ is Gaussian too, thus $$ e \sim \mathcal N(\0, \sigma^2 (I-H)) $$ The covariance matrix is positive semidefinite rather than positive definite since this is supported only on the column space of $X$, but when we consider just $e_i$ it'll behave fine. A $t_k$ distribution is defined as $$ \frac{\mathcal N(0, 1)}{\sqrt{\chi^2_k / k}} $$ with independence between. Define $$ t_i = \frac{e_i}{\hat\sigma_{(i)}\sqrt{1 - h_i}} $$ where $$ \hat\sigma_{(i)}^2 = \frac{1}{n - p - 1}e_{(i)}^Te_{(i)} $$ is the error variance estimate computed for the model with observation $i$ dropped out (so the $n- p - 1$ reflects that $n-1$ was the sample size for this). Doing this means I'm considering the external studentized residuals and I'll actually get a $t$ distribution at the end. See the wikipedia article on studentized residuals for more. The numerator is $e_i \sim \mathcal N(0, \sigma^2 (1 - h_i))$ where $h_i$ is the $i$th element of $\text{diag}(H)$. This means $$ \frac{e_i}{\sigma\sqrt{1 - h_i}} \sim \mathcal N(0,1). $$ Next, consider $\hat\sigma_{(i)}^2$. 
We have $$ y_{(i)}^Ty_{(i)} = y_{(i)}^T(I_{n-1} - H_{(i)} + H_{(i)})y_{(i)} = y_{(i)}^T(I-H_{(i)})y_{(i)} + y_{(i)}^T H_{(i)} y _{(i)} $$ with $H_{(i)}$ and $I-H_{(i)}$ being idempotent and $\text{rank}(I-H_{(i)}) = n-p-1$ so by Cochran's theorem $$ y_{(i)}^T(I-H_{(i)})y_{(i)} / \sigma^2 = e_{(i)}^Te_{(i)} / \sigma^2 \sim \chi^2_{n-p-1}. $$ All together this means $$ t_i = \frac{e_i}{\hat\sigma_{(i)}\sqrt{1 - h_i}} = \frac{\frac{e_i}{\sigma\sqrt{1 - h_i}}}{\sqrt{\frac{e_{(i)}^Te_{(i)}}{\sigma^2(n-p-1)}}} $$ is the ratio of a $\mathcal N(0,1)$ distribution to a $\sqrt{\chi^2_{n-p-1} / (n-p-1)}$. And since observation $i$ does not appear in $\hat\sigma_{(i)}$ I get independence. So that means $$ t_i \sim t_{n-p-1}. $$ I would not be guaranteed independence if I didn't use $\hat\sigma_{(i)}$; if you actually want to use the internal studentized residuals that use the same $\hat\sigma^2 = \frac 1{n-p}e^Te$ for every $t_i$ then you'll get a more complicated distribution. Finally, in your particular case as the wikipedia article says we get $$ 1 - h_i = 1 - \frac 1n - \frac{(x_i - \bar x)^2}{S_{xx}} $$ so we're done. $\newcommand{\1}{\mathbf 1}$Here's a derivation of that. If we're doing simple linear regression then we'll have $X = (\1 \mid x)$ where $x \in \mathbb R^n$ is the non-intercept univariate predictor; $X$ being full rank is equivalent to $x$ not being constant. This means $$ H = X(X^TX)^{-1}X^T = (\1 \mid x)\left(\begin{array}{cc}n & x^T\1 \\ x^T\1 & x^Tx\end{array}\right)^{-1}{\1^T\choose x^T}. $$ We can use the formula for the explicit inverse of a $2\times 2$ matrix to find $$ (X^TX)^{-1} = \frac{1}{nx^Tx - (x^T\1)^2}\left(\begin{array}{cc}x^Tx & -x^T\1 \\ -x^T\1 & n\end{array}\right) $$ so all together we can do the multiplication to get $$ H = \frac{1}{n x^Tx - (\1^T x)^2}\left(x^Tx\cdot \1\1^T - x^T\1 \cdot (\1 x^T + x \1^T) + n xx^T\right). $$ This means $$ h_i = \frac{x^Tx - 2x^T\1\cdot x_i + nx_i^2}{n x^Tx - (\1^T x)^2}. 
$$ For the numerator, I can use the fact that $\1^Tx = n \bar x$ to rewrite it as $$ x^Tx - 2nx_i\bar x + n x_i^2 = x^Tx + n(x_i^2 - 2 x_i\bar x + \bar x^2 - \bar x^2) \\ = x^Tx - n\bar x^2 + n(x_i - \bar x)^2 $$ and noting $S_{xx} = x^Tx - n \bar x^2$ I have $$ h_i = \frac{S_{xx} + n(x_i - \bar x)^2}{nS_{xx}} = \frac 1n + \frac{(x_i - \bar x)^2}{S_{xx}}. $$ This means $$ 1 - h_i = 1 - \frac 1n - \frac{(x_i - \bar x)^2}{S_{xx}} $$ as desired. $\square$
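As a quick numerical sanity check of the closing identity $h_i = \frac 1n + \frac{(x_i - \bar x)^2}{S_{xx}}$, here's a short sketch (in Python/NumPy rather than R; the predictor values are arbitrary simulated numbers):

```python
import numpy as np

# simple linear regression design (1 | x) with an arbitrary predictor
rng = np.random.default_rng(0)
n = 12
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# hat matrix H = X (X^T X)^{-1} X^T and its diagonal
H = X @ np.linalg.solve(X.T @ X, X.T)
h = np.diag(H)

# closed form: h_i = 1/n + (x_i - xbar)^2 / S_xx
Sxx = np.sum((x - x.mean()) ** 2)
h_closed = 1 / n + (x - x.mean()) ** 2 / Sxx

print(np.allclose(h, h_closed))   # True
```

(The diagonal also sums to $p = 2$, i.e. the trace of $H$, which is another quick check.)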
45,625
Do studentized residuals follow t-distribution
jld's answer (+1) describes the construction of a $t$ random variable, but does not mention why independence is violated, so I figured I would chime in. The numerator $$ \frac{e_i}{\sigma\sqrt{1 - h_i}} \sim \mathcal N(0,1) $$ and the chi-squared random variable in the denominator $$ e^Te / \sigma^2 \sim \chi^2_{n-k-1} $$ of the internally studentized residuals are not independent because there exist some integrable functions $f$ and $g$ such that $$ E[f(e_i)g(e^Te)] \neq E[f(e_i)]E[g(e^Te)]. $$ Pick $f(x) = x^2$ and $g$ as the identity mapping. Then the left hand side of the display above is \begin{align*} E[e_i^2 e^Te] &= \sum_{j \neq i } E[ e_j^2] E[e_i^2 ] + E\left[ e_i^2 e_i^2 \right] \\ &= \sigma^4(1-h_{ii})\sum_{j \neq i} (1 - h_{jj}) + E\left[ e_i^4 \right] \\ &= \sigma^4\left[ (1-h_{ii})\sum_{j \neq i} (1 - h_{jj}) + 3(1-h_{ii})^2 \right]\\ &= \sigma^4(1-h_{ii})\left[ \sum_{j } (1 - h_{jj}) + 2(1-h_{ii}) \right] \\ &= \sigma^4(1-h_{ii})\left[ \text{trace}(I - H) + 2(1-h_{ii}) \right] \\ &= \sigma^4(1-h_{ii})\left[ \text{rank}(I - H) + 2(1-h_{ii}) \right] \\ &= \sigma^4(1-h_{ii})\left[(n - k - 1) + 2(1-h_{ii}) \right] , \end{align*} but the right hand side is $$ E[e_i^2]E[e^Te] = \sigma^4(1 - h_{ii})(n-k-1) $$ because $e^Te \sim \sigma^2 \chi^2_{n-k-1}$. What's interesting, though, is that they aren't correlated: $$ \text{Cov}\left(\frac{e_i}{\sigma\sqrt{1 - h_i}}, \frac{e^T e}{\sigma^2}\right) \propto E[e_i e^T e] = E\left[ \sum_{j \neq i} e_j^2 e_i + e_i^3 \right] = 0. $$
45,626
Wilcoxon Test - non normality, non equal variances, sample size not the same
tl;dr if you want to interpret the rejection of the null hypothesis as evidence that prices for women are greater than those for men, then you do need the assumption of equal variance (in fact, equal distributions) between the two populations. If you are satisfied with showing that the distribution of prices for women differs in some way from that of men, then you don't need the extra assumption. You don't need to worry about unequal sample size (this will affect the power of the test, but not its validity) or Normality. For what it's worth, testing whether one group's values are larger on average than another group's when their variances also differ is a surprisingly deep question, even for Normally distributed data (where it's known as the Behrens-Fisher problem). Referring to the Wikipedia page: the "very general formulation" says: Under the null hypothesis H0, the distributions of both populations are equal.[3] The alternative hypothesis H1 is that the distributions are not equal. The next paragraph says: Under more strict assumptions than the general formulation above, e.g., if the responses are assumed to be continuous and the alternative is restricted to a shift in location, i.e., $F_1(x) = F_2(x + \delta)$, we can interpret a significant Mann–Whitney U test as showing a difference in medians ... (emphasis added) Note to technical readers: I think this is a reasonable summary, but if anyone wants to be more rigorous, feel free to comment or edit or post an alternative answer ...
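For concreteness, here is a bare-bones sketch of the Mann–Whitney $U$ statistic computed from rank sums (in Python; it ignores ties and the p-value computation, so it's illustrative only, and the numbers are made up):

```python
def mann_whitney_u(x, y):
    """U for group x via rank sums; assumes no ties (illustrative sketch)."""
    combined = sorted((v, g) for g, vals in ((0, x), (1, y)) for v in vals)
    rank_sum_x = sum(i + 1 for i, (v, g) in enumerate(combined) if g == 0)
    n1 = len(x)
    return rank_sum_x - n1 * (n1 + 1) // 2

# toy data: U counts how often an x-value beats a y-value
print(mann_whitney_u([1.1, 2.3, 2.9, 4.0], [0.8, 1.0, 1.7]))   # 11 (out of 4*3 = 12 pairs)
```

Under the general null (identical distributions), every ordering of the pooled sample is equally likely, which is what the test exploits; a significant $U$ by itself only licenses the "distributions differ" conclusion described above.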
45,627
Wilcoxon Test - non normality, non equal variances, sample size not the same
Ben Bolker's answer is great. I just wanted to add an answer to the "Would another test be better?" part. If you want to be able to conclude that prices for women are greater than those for men without the assumptions of equal distributions under the null, Brunner-Munzel's test can be recommended. For a full technical description of this test, see Chapter 3 in https://link.springer.com/content/pdf/10.1007/978-3-030-02914-2.pdf. For a non-technical introduction and comparison to the Wilcoxon test, see https://journals.sagepub.com/doi/full/10.1177/2515245921999602 (Disclaimer: I am the author of this one). A disclaimer: this test is not optimal in any strict sense. However, it seems to be the most reasonable test for the nonparametric Behrens-Fisher problem available at the moment, just as Welch's t test is not optimal in any sense but most reasonable if you want to test equal means. Indeed, Brunner-Munzel's test is almost Welch's t test on ranked data, while the Wilcoxon test is close to Student's t test on ranked data.
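To make the closing remark concrete, here is a rough sketch (Python, assumes no ties, and is only "in the spirit of" Brunner-Munzel — the real test uses a particular variance estimator and degrees of freedom): Welch's $t$ statistic computed on the pooled ranks.

```python
def welch_t_on_ranks(x, y):
    """Welch's t statistic on pooled ranks (no ties handled; illustration only)."""
    pooled = sorted(x + y)
    rank = {v: i + 1 for i, v in enumerate(pooled)}
    rx = [rank[v] for v in x]
    ry = [rank[v] for v in y]

    def mean(a):
        return sum(a) / len(a)

    def var(a):  # sample variance
        m = mean(a)
        return sum((v - m) ** 2 for v in a) / (len(a) - 1)

    se = (var(rx) / len(rx) + var(ry) / len(ry)) ** 0.5
    return (mean(rx) - mean(ry)) / se

print(round(welch_t_on_ranks([1.2, 3.4, 5.6, 7.8], [0.5, 2.2, 2.8]), 2))   # 1.67
```

The per-group rank variances in the denominator are what let the groups have unequal spread, mirroring how Welch's $t$ relaxes Student's $t$.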
45,628
What would be a Bayesian equivalent of this mixed-effects logistic regression model
Stan is the state-of-the-art in Bayesian model fitting. It has an official R interface through rstan. With rstan you would need to learn how to write your models in the Stan language. Alternatively, Stan also provides the rstanarm package (hat-tip to @ben-bolker for pointing out the omission), through which you can write your models in the familiar lme4-style syntax. An equally user-friendly interface for Stan is the R package brms, which is in addition flexible enough to handle models that should satisfy basic and moderately advanced users. For example, in your case the syntax would be exactly the same:

    m <- brm(Shop ~ Time + Group + Time:Group + (1 | subj), data = Shopping, family = binomial)

or, more compactly (the same would work with glmer as well):

    m <- brm(Shop ~ Time*Group + (1 | subj), data = Shopping, family = binomial)

This model in brms will assume reasonable defaults for the prior distributions but you are encouraged to select your own. The syntax for basic models such as the one you give as an example is going to be the same between rstanarm and brms. The advantage of using rstanarm to fit these basic models is that it comes with pre-compiled Stan code, so it is going to run faster than brms, which needs to compile its Stan code for every model. To name a few distinguishing features, brms shines due to its extended support for different distributions (e.g. "zero-inflated beta", "von Mises", categorical), its extended syntax to cover cases where the user needs to model e.g. predictor or outcome measurement error (as in meta-analyses), and its ability to fit distributional regressions, non-linear models, or mixture models. For a more extensive comparison of R packages for Bayesian analysis have a look at Bürkner 2018. 
Since you are a newcomer to Bayesian models, I would also highly encourage you to read the book "Statistical Rethinking", which comes with its own R package, rethinking, that is also an excellent choice, although not as remarkably user-friendly and flexible as brms. There's even a version of the book adapted for brms. **References** Paul-Christian Bürkner, The R Journal (2018) 10:1, pages 395-411.
45,629
Searching for a weekly rhythm in weight loss/gain
Using differences between succeeding days (more susceptible to noise)

Compute for every week $i$ the seven values $$\begin{array}{rcl} d_{1i} &=& weight_{Monday}-weight_{Sunday} \\ d_{2i} &=& weight_{Tuesday}-weight_{Monday} \\ d_{3i} &=& weight_{Wednesday}-weight_{Tuesday} \\ d_{4i} &=& weight_{Thursday}-weight_{Wednesday} \\ d_{5i} &=& weight_{Friday}-weight_{Thursday} \\ d_{6i} &=& weight_{Saturday}-weight_{Friday} \\ d_{7i} &=& weight_{Sunday}-weight_{Saturday} \end{array}$$ giving you $7n$ values for $n$ weeks. (Note that in expressions such as $weight_{Monday}-weight_{Sunday}$, the Sunday is the day before that Monday, not six days after it. You could avoid this ambiguity by adding subscripts for the week number, but I wanted to avoid the clutter, and it would also require defining which day is the beginning of the week.) You can plot these values $d_{ji}$ as a function of the day of the week $j$. This scatter-plot can be made as well with descriptive statistics such as a box-plot, or just the computed mean value with an expression of the error of the mean. If this approach does not give you satisfying results (e.g. the variance is too large to see clear differences between weekdays) then you might try more advanced models. For instance, possibly the weight loss varies from week to week, and you could separate out this variance (there are different ways to do this) such that the differences between the weekdays become clearer.

Using differences from a trendline (better with relation to noise)

Personally I would end up using some model for the overall trend, some function of time $f(t)$ (a linear function would be easiest; in the article that you mention it is a moving average), and then add specific (systematic) "error"/effect terms for the weekdays $g(weekday)$ in addition to the random error $\epsilon$. 
For example: $$weight(t) = f(t) + g(weekday) + \epsilon $$ In this case you will plot the residuals (the difference of the measurement from the trend line) as a function of weekday. I feel that this is a better way than looking at daily differences since taking differences will emphasize the noise.

Example

For this example I needed a dataset of day-to-day body weight measurements. I took the data from https://medium.com/technology-liberal-arts/the-data-diet-how-i-lost-60-pounds-using-a-google-docs-spreadsheet-80adce62cf5c which links to a google-docs measurement where I saved the yearly measurement columns as a csv. Then using the R-code below (you can just as well do this in Excel, but it will be a bit more laborious) you will get the image below

    # get data and packages
    require(signal)
    data <- read.csv("~/Desktop/weight data.csv", header=TRUE)

    # data for analysis
    w <- data$X2011[-60]
    t <- 1:365
    t2 <- t^2
    d <- 1+(t+4)%%7  # monday = 1 .... Jan 1 2011 is a Saturday = 6

    # plotting the entire year and models
    layout(matrix(1:2,1))
    plot(t, w, xlab = "time [day of year 2011]", ylab = "weight [pounds]")
    title("plot of data and trend for entire year 2011")

    # linear model
    mod <- lm(w ~ 1 + t + t2)
    lines(t, predict(mod))

    # moving average
    psg <- sgolayfilt(w, p=1, n=7)
    lines(t, psg)

    # plotting the weekdays
    # boxplot(residuals(mod) ~ d)  # this quadratic model is not so good
    # (large variance in weekdays due to temporal variations with period larger than a week)
    dt <- as.factor(d)
    levels(dt) <- c("Monday","Tuesday","Wednesday","Thursday","Friday","Saturday","Sunday")
    boxplot(w - psg ~ dt, outline = FALSE, ylim = c(-4,4),
            ylab = "weight difference from trend [pounds]")
    points(jitter(d, 0.5), w - psg, pch = 21, cex = 0.7, bg = "white", col = "black")
    title("plot of variation from trend as function of weekday")

    # you could do some analysis of significance for the weekday factor
    mod2 <- lm(w - psg ~ 0 + dt)
    summary(mod2)

    # this is what you would get when you use the differential between succeeding days
    dw <- w[-1] - w[-365]
    dd <- d[-1]
    boxplot(dw ~ dd)
    summary(lm(dw ~ as.factor(dd)))

Off-topic note about the interpretation

The idea is that if certain weekdays can be identified as problematic then the cause can be identified and behavio(u)r changed. This may be problematic, and at least very tricky/difficult, as variations in weight need not be associated with variations in fat (which is, I imagine, the main target) and could be weight losses due to intestinal content, salt and water, muscle glycogen and water. For instance, if your friend makes a long distance run (>2hrs) he will lose a lot of muscle glycogen and associated water (about 2 kg) and the next day this will be higher again. That does not make this next day a bad day. If your friend has a day of eating only bananas or only juice (what those trendy diets advocate) then he will lose a lot of sodium salts and the associated water that is being held by the salt, as well as (in the case of the juices) a lot of dietary fiber in the intestines. This will record as a day with a lot of weight loss, but it is not a good kind of weight loss. If your friend has particular days of eating a lot of vegetables (which is good because of low energy and high nutritional value), then those days will actually relate to an increase of weight. That is because vegetables contain a lot of dietary fiber and will make the intestinal content more heavy.
45,630
Searching for a weekly rhythm in weight loss/gain
I would plot this with the $x$-axis showing the days of the week and plot a separate line for each week with weight on the $y$-axis. I might also compute the mean for Mondays, Tuesdays, and so on and plot that as well with a contrasting symbol. If I was interested in finding days which stood out from the trend I would subtract the mean for Monday from each Monday and so on and plot those differences. There are also lots of analyses you could do but (a) you asked for display ideas, (b) a picture is worth a thousand words. Even if I was going to wheel up the statistical heavy artillery I would plot the data first.
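The "subtract each weekday's mean" idea can be sketched in a few lines (Python here, with made-up numbers; rows are weeks, columns Monday through Sunday):

```python
# hypothetical data: two weeks of daily weights, columns Mon..Sun
weeks = [
    [80.0, 79.8, 79.9, 79.5, 79.6, 80.1, 80.3],
    [79.9, 79.7, 79.6, 79.4, 79.5, 79.9, 80.2],
]
n_weeks = len(weeks)

# mean for Mondays, Tuesdays, and so on
day_means = [sum(w[d] for w in weeks) / n_weeks for d in range(7)]

# deviation of each observation from its weekday mean (what you'd plot)
deviations = [[round(w[d] - day_means[d], 3) for d in range(7)] for w in weeks]
print(day_means)
print(deviations)
```

Plotting `deviations` against the weekday, one line per week, is exactly the "differences from the weekday mean" display described above.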
45,631
Evaluating the hazard function when the CDF is close to 1?
If the matter is numerical stability, you could look at the log of the hazard function: $$\log h(t; \theta) = \log f(t;\theta) - \log(1-F(t;\theta))$$ You could use the log / log.p = TRUE flag in R for log values and the lower.tail flag for obtaining $\log(1 - F(t;\theta))$ values:

    dweibull(100, 1, 1, log = TRUE)                       # -100
    pweibull(100, 1, 1, log.p = TRUE, lower.tail = FALSE) # -100

Which gives you an estimate: $h(t;\theta) = \exp(-100 + 100) = 1$ Edit: By the way, when you have a $Weibull(1, 1)$ distribution, I believe that this is an $Exponential(1)$, so it has a constant hazard function.
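The same log-scale trick works outside R as well; here's a sketch in Python for the scale-1 Weibull, where the log-density and log-survival are available in closed form (the function name is my own, not an R or library function):

```python
import math

def weibull_log_hazard(t, shape):
    """log h(t) = log f(t) - log S(t) for Weibull(shape, scale = 1)."""
    log_f = math.log(shape) + (shape - 1) * math.log(t) - t ** shape
    log_S = -t ** shape
    return log_f - log_S

# shape = 1 is Exponential(1): constant hazard 1, even far in the tail
print(math.exp(weibull_log_hazard(100, 1)))    # 1.0
print(math.exp(weibull_log_hazard(1000, 1)))   # 1.0 (naive f/S would be 0/0 here)
```

At $t = 1000$ both $f$ and $S$ underflow to 0 in double precision, so the naive ratio is undefined while the log-scale version is exact.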
45,632
Evaluating the hazard function when the CDF is close to 1?
For a survival curve based on a parametric distribution, the hazard is often an explicit function of the parameters. For example, this link provides hazard functions for several distributions. So when we know the values of the parameters and want to calculate the hazard, as asked in this question, the best way is to use the hazard function directly, instead of going through the CDF and PDF. For example, the hazard function of the standard Weibull distribution is $h(x)=\gamma x^{\gamma-1}$.
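As a quick check of the direct route (a Python sketch with scipy; the shape value $\gamma=2$ is an arbitrary example), the explicit hazard formula agrees with $f/(1-F)$ wherever the latter is still numerically safe:

```python
from scipy.stats import weibull_min

gamma, x = 2.0, 3.0

# Direct formula for the standard Weibull hazard: h(x) = gamma * x**(gamma - 1)
h_direct = gamma * x ** (gamma - 1)

# Same quantity via pdf/sf -- fine here because F(3) is not yet ~1
dist = weibull_min(c=gamma)
h_ratio = dist.pdf(x) / dist.sf(x)
```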
45,633
About Sampling and Random Variables
The random variable $Y$ describes a relationship between events and the corresponding probabilities of those events. In more practical terms, a random variable describes a data-generating process. When you generate a random data point that is described by the random variable $Y$, the probability distribution of $Y$ describes the probability distribution of values that can result. You can think of a "population" as an infinite reservoir of values drawn from $Y$. Sampling from a population is analogous to repeatedly drawing new values from $Y$. A sample of size $N$ is a size-$N$ collection of individual draws from $Y$. The sample is clearly not the same thing as the random variable itself, so we need a different notation for it. Let's call it $s = \{y_1, y_2, \dots, y_N \}$. Each $y_n$ is a single draw from $Y$. The sample mean is a single number. Let's call it $\bar s$. It is the mean of the sequence $s$, i.e. $\bar s = \frac{y_1 + y_2 + \dots + y_N}{N}$. We can make an interesting observation here! $N$ independent, identical draws from a random variable $Y$ is the same thing as one draw from each of $N$ independent, identical random variables $Y_n$. Now, we can talk about the sample itself as a random variable $S = \{ Y_1, \dots, Y_N \}$. Note the difference between $$ s = \{ y_1, \dots, y_N \} $$ and $$ S = \{ Y_1, \dots, Y_N \} $$ $S$ is random: it is a sequence of random variables. $s$ is not random. It is the realized value of a draw from $S$, i.e. a sequence of realized values of draws from $Y_1, \dots, Y_N$. Therefore the sample mean itself can be restated as a random variable $\bar S$. Compare $$ \bar s = \frac{ y_1 + \cdots + y_N}{N} $$ with $$ \bar S = \frac{ Y_1 + \cdots + Y_N}{N} $$ $\bar s$ is just a number: it is the mean of a sequence of numbers $y_1, \cdots, y_N$. But $\bar S$ is a random variable! Specifically, it is a statistic, a single quantity that is calculated from a sample. 
The value of a statistic for a specific sample is a realization of the distribution for that statistic. Because $\bar S$ is a random variable, its draws are described by a probability distribution. The distribution of sample means, across all possible samples, is described by the distribution of $\bar S$. This distribution is the sampling distribution of the mean. With regard to your first question, you are probably confused between the random variable $Y$ and the matrix $Y$. It is an unfortunate clash in notation that random variables and matrices are both conventionally written with capital letters. It is often mathematically convenient to express samples as matrices, so that you can do linear algebra operations on observed data (to generate estimates from that data, e.g. with ordinary least squares). The matrix $Y$ would be a matrix of observed values. Take care to observe the context, to avoid this confusion. To address your second question, there are many ways to derive or describe a sampling distribution. One possible technique is repeated sampling: repeatedly draw samples from a population that is distributed according to $Y$, and measure the sample mean in each of those samples. The distribution of those sample means should follow the sampling distribution of the mean.
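The repeated-sampling idea can be sketched numerically (Python/numpy; the Normal(10, 2) population is an arbitrary choice for illustration). Each row below realizes one draw of $S$, and its mean is one draw from $\bar S$:

```python
import numpy as np

rng = np.random.default_rng(1)

# A finite stand-in for the "infinite reservoir" of values drawn from Y
population = rng.normal(loc=10, scale=2, size=200_000)

N, reps = 25, 10_000
# reps independent samples of size N; each row realizes S = {Y_1, ..., Y_N}
samples = rng.choice(population, size=(reps, N))
sample_means = samples.mean(axis=1)   # reps draws from the random variable S-bar

# The sampling distribution of the mean: centred on mu, with spread sigma / sqrt(N)
print(sample_means.mean(), sample_means.std())
```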
45,634
About Sampling and Random Variables
For the sake of redundancy and addition, a random variable is (the mathematical modeling of the process of taking) a measurement or experiment whose value is not predictable/deterministic; its value can only be understood probabilistically, meaning that it can be tested over the long run. A standard example is flipping a fair coin*: one cannot tell for any flip whether it will land heads or tails, but a pattern emerges over the long run, with limiting values for the probabilities P(Heads) = P(Tails) = 1/2. Re your $Y$: yes, the layout is a bit confusing. Like Shadowtalker said, $Y_1, Y_2, \dots, Y_n$ are realizations of the same process $Y$, where $Y$ may represent throwing a die, flipping a coin, etc. If independent, $Y_1, Y_2, \dots, Y_n$ are said to be IID RVs: independent, identically distributed random variables. And yes, the sample mean is the random variable that takes the sample (quantitative) values $Y_1, Y_2, \dots, Y_n$ and assigns to them the value $\frac{Y_1 + Y_2 + \dots + Y_n}{n}$. There are many other possible sample statistics: the sample variance, the sample standard error, etc. An important result to note, I think, is the CLT, the Central Limit Theorem, which tells you that, no matter what the underlying distribution (provided it has finite variance), if $Y_1, Y_2, \dots, Y_n$ are independent and identically distributed, then the sample mean will approach a normal distribution as $n$ becomes large enough ($n > 30$, or $n > 40$ for higher accuracy). *Assume we know the coin is fair, to avoid a rabbit hole.
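A quick numerical illustration of the CLT (Python sketch; the Exponential(1) source distribution is chosen because it is strongly skewed): means of size-50 samples already cluster around the true mean 1 with spread $1/\sqrt{50}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 50, 20_000

# Exponential(1) is far from normal, yet its sample means behave normally for modest n
draws = rng.exponential(scale=1.0, size=(reps, n))
means = draws.mean(axis=1)

# CLT prediction: centre ~ 1 (the true mean), spread ~ 1/sqrt(n)
print(means.mean(), means.std())
```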
45,635
Can Cramér's V be used as an effect size measure for McNemar's test?
In the context of a 2x2 table, Cramer's $V$ is equivalent to the phi coefficient. Moreover, phi is equivalent to Pearson's product moment correlation of the two columns of $1$'s and $0$'s when the 2x2 table is disaggregated. That corresponds to a different magnitude than the one that McNemar's test is testing. So, no, I don't think it is a good choice. With McNemar's test, you are comparing two marginal proportions (in your case, $76\%$ true before, and $36\%$ true after). Presenting the complete table is pretty easy, as it's just four numbers. But it may be psychologically helpful to present those proportions and whatever magnitude derived from them makes the most sense to people in your field (e.g., the odds ratio). You could even explicitly refer to the cells that are used by McNemar's test. For instance: There were 10 cases where people changed their minds; in all such cases, people switched from true to false.
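A worked sketch in Python of the contrast being made. The 2x2 table below is hypothetical but consistent with the numbers in the answer: 25 people, 19 (76%) answering true before, 9 (36%) after, with all 10 mind-changers switching from true to false. McNemar's (asymptotic, uncorrected) statistic uses only the discordant cells, while phi uses the whole table.

```python
from math import sqrt
from scipy.stats import chi2

# Rows: before (true, false); columns: after (true, false)
a, b, c, d = 9, 10, 0, 6   # a, d concordant; b, c discordant

# Marginal proportions compared by McNemar's test
p_before = (a + b) / (a + b + c + d)   # 0.76
p_after  = (a + c) / (a + b + c + d)   # 0.36

# McNemar's statistic depends only on b and c
mcnemar_stat = (b - c) ** 2 / (b + c)
p_value = chi2.sf(mcnemar_stat, df=1)

# Phi (= Cramer's V for a 2x2 table) uses all four cells: a different quantity
phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))
```

(With a zero cell like c = 0, an exact binomial version of McNemar's test would be preferable in practice; the asymptotic form is shown only to make the contrast with phi concrete.)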
45,636
Can Cramér's V be used as an effect size measure for McNemar's test?
Cramér's $V$ doesn't correspond well to what is tested by McNemar's test. Edit: Disclosure: the webpage and R package cited below are mine. Probably the most common effect size statistic for McNemar's test is the odds ratio, though Cohen's $g$ could be used. Cohen (1988) also uses a statistic he calls $P$. For definitions, quoting from here: Considering a 2 x 2 table, with $a$ and $d$ being the concordant cells and $b$ and $c$ being the discordant cells, the odds ratio is simply the greater of $(b/c)$ or $(c/b)$, and $P$ is the greater of $(b/(b+c))$ or $(c/(b+c))$. Cohen's $g$ is $P - 0.5$. Cohen (1988) also gives interpretations ("small", "medium", "large") for his $g$ statistic. Because $g$ is monotonically related to the odds ratio, these interpretations can be extended to the odds ratio (same link). Edit: Interpretations of effect size statistics are always relative to the field of study, specific experiment, and practical considerations. Cohen's interpretations should not be considered universal. Obviously, not much coding is needed to calculate any of these statistics for the 2 x 2 case. However, if they are extended to larger tables, the math can get tricky. In R, the cohenG() function in the rcompanion package makes this relatively easy. Links include R code. Reference: Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences, 2nd Edition. Routledge.
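The 2 x 2 definitions quoted above are easy to compute directly (Python sketch; the discordant counts b = 10, c = 5 are made up for illustration):

```python
# Discordant cells of a hypothetical 2x2 table, as in the quoted definitions
b, c = 10, 5

odds_ratio = max(b / c, c / b)       # greater of b/c and c/b      -> 2.0
P = max(b / (b + c), c / (b + c))    # greater of b/(b+c), c/(b+c) -> 2/3
g = P - 0.5                          # Cohen's g                   -> ~0.167
```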
45,637
Equivalence test for binomial data
While one can use the t test to test for a difference in proportions, the z test is a tad more precise, since it uses an estimate of the standard deviation formulated specifically for binomial (i.e. dichotomous, nominal, etc.) data. The same applies to the z test for proportion equivalence. First, the z test for a difference in proportions of two independent samples is pretty straightforward:

About z tests for unpaired proportion difference

The null hypothesis is $H_{0}\text{: }p_{1} - p_{2} = 0$ (i.e. $H_{0}\text{: }p_{1} = p_{2}$), with $H_{\text{A}}\text{: }p_{1} - p_{2} \ne 0$.

$z = \frac{\hat{p}_{1}-\hat{p}_{2}}{\sqrt{\hat{p}\left(1-\hat{p}\right)\left[\frac{1}{n_{1}} + \frac{1}{n_{2}}\right]}}$, where:

$\hat{p}_{1}$ and $\hat{p}_{2}$ are the sample proportions in group 1 and group 2; $n_{1}$ and $n_{2}$ are the sample sizes in group 1 and group 2; and $\hat{p}$ is the estimate of the common proportion if $H_{0}$ is true, the best guess of which is simply the overall sample proportion (i.e. of all the data, ignoring which group an observation is from).

You might want to consider a continuity correction. For example, Hauck and Anderson's (1986) correction gives:

$c_{\text{HA}} = \frac{1}{2\min{(n_{1},n_{2})}}$, and a redefined $s_{\hat{p}}$:

$s_{\hat{p}}= \sqrt{ \frac{\hat{p}_{1}(1-\hat{p}_{1})}{n_{1}-1} + \frac{\hat{p}_{2}(1-\hat{p}_{2})}{n_{2}-1}}$, so that

$z = \frac{\left|\hat{p}_{1} - \hat{p}_{2}\right| - c_{\text{HA}}}{\sqrt{ \frac{\hat{p}_{1}(1-\hat{p}_{1})}{n_{1}-1} + \frac{\hat{p}_{2}(1-\hat{p}_{2})}{n_{2}-1}}}$

The appropriate $p$-value for this $z$-statistic is then calculated or looked up in a table, and compared to $\alpha/2$ (two-tailed test).

About z tests for unpaired proportion equivalence

Because all differences are "statistically significant" given a large enough sample size, it is a good idea to decide beforehand what the smallest relevant difference in proportions is to you, and then look for evidence of such relevance. You find such evidence by combining the inferences from the test for difference just described with a test for equivalence. Suppose you decide beforehand that a meaningful difference in proportions for your purposes is one that is at least 0.05 (i.e. $|p_{1} - p_{2}| \ge 0.05$); then the corresponding test for equivalence of proportions for two independent groups is:

$H^{-}_{0}\text{: }|p_{1} - p_{2}| \ge 0.05$, which translates into two one-sided null hypotheses:

$H^{-}_{01}\text{: }p_{1} - p_{2} \ge 0.05$

$H^{-}_{02}\text{: }p_{1} - p_{2} \le -0.05$

These two one-sided null hypotheses can be tested with (both test statistics have been constructed for upper-tail one-sided tests):

$z_{1} = \frac{0.05 - \left(\hat{p}_{1}-\hat{p}_{2}\right)}{\sqrt{\hat{p}\left(1-\hat{p}\right)\left[\frac{1}{n_{1}} + \frac{1}{n_{2}}\right]}}$, and

$z_{2} = \frac{\left(\hat{p}_{1}-\hat{p}_{2}\right)+0.05}{\sqrt{\hat{p}\left(1-\hat{p}\right)\left[\frac{1}{n_{1}} + \frac{1}{n_{2}}\right]}}$.

With a continuity correction, $z_{1}$ and $z_{2}$ instead become (see Tu, 1997):

$z_{1} = \frac{0.05 - \left(\hat{p}_{1}-\hat{p}_{2}\right) + c_{\text{HA}}}{\sqrt{ \frac{\hat{p}_{1}(1-\hat{p}_{1})}{n_{1}-1} + \frac{\hat{p}_{2}(1-\hat{p}_{2})}{n_{2}-1}}}$, and

$z_{2} = \frac{\left(\hat{p}_{1}-\hat{p}_{2}\right)+0.05-c_{\text{HA}}}{\sqrt{ \frac{\hat{p}_{1}(1-\hat{p}_{1})}{n_{1}-1} + \frac{\hat{p}_{2}(1-\hat{p}_{2})}{n_{2}-1}}}$.

If you reject both $H^{-}_{01}$ and $H^{-}_{02}$ (both tested at $\alpha$, not $\alpha/2$, and both with right-tail rejection regions), then you can conclude that you have evidence of equivalence.

About relevance tests

Finally... if you combine inference from the tests of $H_{0}$ and $H^{-}_{0}$ (i.e. the test for difference and the test for equivalence), then you get one of the following possibilities:

reject $H_{0}$ and reject $H^{-}_{0}$: conclude a trivial difference between proportions (i.e. yes, there is a difference, but it is too small for you to care about, because it is smaller than 0.05);
reject $H_{0}$ and not reject $H^{-}_{0}$: conclude a relevant difference between proportions (i.e. larger than 0.05);
not reject $H_{0}$ and reject $H^{-}_{0}$: conclude equivalence of proportions; or
not reject $H_{0}$ and not reject $H^{-}_{0}$: conclude indeterminate (i.e. underpowered tests).

R code

First the test for difference. Assume g1 and g2 are vectors containing the binomial data for group 1 and group 2 respectively.

n1 <- length(g1)         # sample size group 1
n2 <- length(g2)         # sample size group 2
p1 <- sum(g1)/n1         # p1 hat
p2 <- sum(g2)/n2         # p2 hat
n  <- n1 + n2            # overall sample size
p  <- sum(g1, g2)/n      # p hat
cHA <- 1/(2*min(n1, n2)) # Hauck-Anderson continuity correction

# Without continuity correction
z    <- (p1 - p2)/sqrt(p*(1-p)*(1/n1 + 1/n2))  # test statistic
pval <- 1 - pnorm(abs(z))                      # reject H0 if <= alpha/2 (two-tailed)

# With continuity correction
zHA    <- (abs(p1 - p2) - cHA)/sqrt(p1*(1-p1)/(n1-1) + p2*(1-p2)/(n2-1))
pvalHA <- 1 - pnorm(abs(zHA))                  # reject H0 if <= alpha/2 (two-tailed)

Next the test for equivalence. Again, assume g1 and g2 are vectors containing the binomial data for group 1 and group 2 respectively.

Delta <- 0.05 # Equivalence threshold of +/- 5%. You will want to carefully
              # think about and select your own value for Delta before you
              # conduct your test.

n1 <- length(g1)         # sample size group 1
n2 <- length(g2)         # sample size group 2
p1 <- sum(g1)/n1         # p1 hat
p2 <- sum(g2)/n2         # p2 hat
n  <- n1 + n2            # overall sample size
p  <- sum(g1, g2)/n      # p hat
cHAeq <- sign(p1 - p2)*(1/(2*min(n1, n2)))

# Without continuity correction
z1 <- (Delta - (p1 - p2))/sqrt(p*(1-p)*(1/n1 + 1/n2)) # test statistic for H01
z2 <- ((p1 - p2) + Delta)/sqrt(p*(1-p)*(1/n1 + 1/n2)) # test statistic for H02
pval1 <- 1 - pnorm(z1) # upper-tail p-value; reject H01 if <= alpha (one-tailed)
pval2 <- 1 - pnorm(z2) # upper-tail p-value; reject H02 if <= alpha (one-tailed)

# With continuity correction
zHA1 <- (Delta - abs(p1 - p2) + cHAeq)/sqrt(p1*(1-p1)/(n1-1) + p2*(1-p2)/(n2-1))
zHA2 <- (abs(p1 - p2) + Delta - cHAeq)/sqrt(p1*(1-p1)/(n1-1) + p2*(1-p2)/(n2-1))
pvalHA1 <- 1 - pnorm(zHA1) # upper-tail p-value; reject H01 if <= alpha (one-tailed)
pvalHA2 <- 1 - pnorm(zHA2) # upper-tail p-value; reject H02 if <= alpha (one-tailed)

References

Hauck, W. W. and Anderson, S. (1986). A comparison of large-sample confidence interval methods for the difference of two binomial probabilities. The American Statistician, 40(4):318–322.

Tu, D. (1997). Two one-sided tests procedures in establishing therapeutic equivalence with binary clinical endpoints: fixed sample performances and sample size determination. Journal of Statistical Computation and Simulation, 59(3):271–290.
Equivalence test for binominal data
While one can use the t test to test for proportion difference, the z test is a tad more precise, since it uses an estimate of the standard deviation formulated specifically for binomial (i.e. dichoto
Equivalence test for binominal data While one can use the t test to test for proportion difference, the z test is a tad more precise, since it uses an estimate of the standard deviation formulated specifically for binomial (i.e. dichotomous, nominal, etc.) data. The same applies to the z test for proportion equivalence. First, the z test for difference in proportions of two independent samples is pretty straightforward: About z tests for unpaired proportion difference The null hypothesis is $H_{0}\text{: }p_{1} - p_{2} = 0$ (i.e. $H_{0}\text{: }p_{1} = p_{2}$), with $H_{\text{A}}\text{: }p_{1} - p_{2} \ne 0$. $z = \frac{\hat{p}_{1}-\hat{p}_{2}}{\sqrt{\hat{p}\left(1-\hat{p}\right)\left[\frac{1}{n_{1}} + \frac{1}{n_{2}}\right]}}$, where: $\hat{p}_{1}$ and $\hat{p}_{1}$ are the sample proportions in group 1 and group 2; $n_{1}$ and $n_{2}$ are the sample sizes in group 1 and group 2; and $\hat{p}$ is the estimate of the sample means if $H_{0}$ is true, the best guess of which is simply the overall sample proportion (i.e. of all the data, ignoring which group an observation is from). You might want to consider a continuity correction. For example, Hauck and Anderson's (1986) correction gives: $c_{\text{HA}} = \frac{1}{2\min{(n_{1},n_{2})}}$, and a redefined $s_{\hat{p}}$: $s_{\hat{p}}= \sqrt{ \frac{\hat{p}_{1}(1-\hat{p}_{1})}{n_{1}-1} + \frac{\hat{p}_{2}(1-\hat{p}_{2})}{n_{2}-1}}$, so that $z = \frac{\left|\hat{p}_{1} - \hat{p}_{2}\right| - c_{\text{HA}}}{\sqrt{ \frac{\hat{p}_{1}(1-\hat{p}_{1})}{n_{1}-1} + \frac{\hat{p}_{2}(1-\hat{p}_{2})}{n_{2}-1}}}$ The appropriate $p$-value for this $z$-statistic is then calculated or looked up in a table, and compared to $\alpha/2$ (two-tailed test). 
About z tests for unpaired proportion equivalence Because all differences are "statistically significant" given a large enough sample size, it is a good idea to decide beforehand what the smallest relevant difference in proportions is to you, and then look for evidence of such relevance. You find such evidence by combining the inferences from the test for difference just described, with a test for equivalence. Suppose you decide beforehand that a meaningful difference in proportion for your purposes is on that is at least 0.05 (i.e. $|p_{1} - p_{2}| \ge 0.05$), then the corresponding test for equivalence of proportions for two independent groups is: $H^{-}_{0}\text{: }|p_{1} - p_{2}| \ge 0.05$, which translates into two one-sided null hypotheses: $H^{-}_{01}\text{: }p_{1} - p_{2} \ge 0.05$ $H^{-}_{02}\text{: }p_{1} - p_{2} \le -0.05$ These two one-sided null hypotheses can be tested with (these test statistics have been constructed both for upper tail one-sided tests): $z_{1} = \frac{0.05 - \left(\hat{p}_{1}-\hat{p}_{2}\right)}{\sqrt{\hat{p}\left(1-\hat{p}\right)\left[\frac{1}{n_{1}} + \frac{1}{n_{2}}\right]}}$, and $z_{2} = \frac{\left(\hat{p}_{1}-\hat{p}_{2}\right)+0.05}{\sqrt{\hat{p}\left(1-\hat{p}\right)\left[\frac{1}{n_{1}} + \frac{1}{n_{2}}\right]}}$. With a continuity correction $z_{1}$ and $z_{2}$ instead become (see Tu, 1997): $z_{1} = \frac{0.05 - \left(\hat{p}_{1}-\hat{p}_{2}\right) + c_{\text{HA}}}{\sqrt{ \frac{\hat{p}_{1}(1-\hat{p}_{1})}{n_{1}-1} + \frac{\hat{p}_{2}(1-\hat{p}_{2})}{n_{2}-1}}}$, and $z_{2} = \frac{\left(\hat{p}_{1}-\hat{p}_{2}\right)+0.05-c_{\text{HA}}}{\sqrt{ \frac{\hat{p}_{1}(1-\hat{p}_{1})}{n_{1}-1} + \frac{\hat{p}_{2}(1-\hat{p}_{2})}{n_{2}-1}}}$. If you reject both $H^{-}_{01}$ and $H^{-}_{02}$ (both tested at $\alpha$, not $\alpha/2$, and both tested with right tail rejection regions), then you can conclude that you have evidence of equivalence. **About *relevance tests*** *Finally*... 
if you combine inference from tests of $H_{0}$ *and* $H^{-}_{0}$ (i.e. test for difference and test for equivalence), then you get one of the following possibilities: reject $H_{0}$ and reject $H^{-}_{0}$: conclude trivial difference between proportions (i.e. yes there is a difference, but it's too small for you to care about because it is smaller than 0.05); reject $H_{0}$ and not reject $H^{-}_{0}$: conclude relevant difference between proportions (i.e. larger than 0.05); not reject $H_{0}$ and reject $H^{-}_{0}$: conclude equivalence of proportions; or not reject $H_{0}$ and not reject $H^{-}_{0}$: conclude indeterminate (i.e. underpowered tests). R code First the test for difference: Assume g1 and g2 are vectors containing the binomial data for group 1 and group 2 respectively. n1 <- length(g1) #sample size group 1 n2 <- length(g2) #sample size group 2 p1 <- sum(g1)/n1 #p1 hat p2 <- sum(g2)/n2 #p2 hat n <- n1 + n2 #overall sample size p <- sum(g1,g2)/n #p hat cHA <- 1/(2*min(n1,n2)) # without continuity correction z <- (p1 - p2)/sqrt(p*(1-p)*(1/n1 + 1/n2)) #test statistic pval <- 1 - pnorm(abs(z)) #p-value reject H0 if it is #<= alpha/2 (two-tailed) # with continuity correction zHA <- (abs(p1 - p2) - cHA)/sqrt((p1*(1-p1)/(n1-1)) + (p2*(1-p2)/(n2-1))) #with continuity correction pvalHA <- 1 - pnorm(abs(zHA)) #p-value reject H0 if it is #<= alpha/2 (two-tailed) Next the test for equivalence: Delta <- 0.05 #Equivalence threshold of +/- 5%. # You will want to carefully think about and select your own # value for Delta before you conduct your test. Again, assume g1 and g2 are vectors containing the binomial data for group 1 and group 2 respectively. 
n1 <- length(g1) #sample size group 1 n2 <- length(g2) #sample size group 2 p1 <- sum(g1)/n1 #p1 hat p2 <- sum(g2)/n2 #p2 hat n <- n1 + n2 #overall sample size p <- sum(g1, g2)/n #p hat cHAeq <- sign(p1-p2)* (1/(2*min(n1, n2))) # without continuity correction z1 <- (Delta - (p1 - p2))/sqrt(p*(1-p)*(1/n1 + 1/n2)) #test statistic for H01 z2 <- ((p1 - p2) + Delta)/sqrt(p*(1-p)*(1/n1 + 1/n2)) #test statistic for H02 pval1 <- 1 - pnorm(z1) #p-value (upper tail) reject H0 if it is <= alpha #(one tail) pval2 <- 1 - pnorm(z2) #p-value (upper tail) reject H0 #if it is <= alpha (one tail) # with continuity correction zHA1 <- (Delta - abs(p1 - p2) + cHAeq)/sqrt((p1*(1-p1)/(n1-1)) + (p2*(1-p2)/(n2-1))) #with continuity correction zHA2 <- (abs(p1 - p2) + Delta - cHAeq)/sqrt((p1*(1- p1)/(n1-1)) + (p2*(1-p2)/(n2-1))) #with continuity correction pvalHA1 <- 1 - pnorm(zHA1) #p-value (upper tail) reject H0 #if it is <= alpha (one tail) pvalHA2 <- 1 - pnorm(zHA2) #p-value (upper tail) reject H0 #if it is <= alpha (one tail) References Hauck, W. W. and Anderson, S. (1986). A comparison of large-sample confidence interval methods for the difference of two binomial probabilities. The American Statistician, 40(4):318–322. Tu, D. (1997). Two one-sided tests procedures in establishing therapeutic equivalence with binary clinical endpoints: fixed sample performances and sample size determination. Journal of Statistical Computation and Simulation, 59(3):271–290.
45,638
Efficient random generation from truncated Laplace distribution
A straightforward method that is reasonably efficient if the left truncation point is below the median is to just generate a Laplace random variate, then reject it if it falls to the left of the truncation point and generate another, repeating until one is generated that falls above the truncation point. If the Laplace random variate generation algorithm requires $n$ uniform variate generations on average for one Laplace variate, the truncated algorithm requires $n/(1-F(\alpha))$ uniform variate generations on average, where $\alpha$ is the truncation point. It therefore never requires (on average) more than twice the uniform variate generations of the original algorithm, regardless of where below the median the truncation point lies; and if the truncation point is well into the lower tail, e.g. at the 10th percentile of the distribution, it is almost as efficient as the original algorithm. If the left truncation point is above the median, then the sampling distribution is a shifted exponential distribution with lower bound equal to the truncation point, so there are plenty of efficient algorithms for that case. Another approach, useful if your Laplace random variate generation algorithm uses inverse transform sampling, is to shift and rescale the initial $\text{U}(0,1)$ variate to fall into the range $\text{U}(\alpha,1)$, where $\alpha$ is now the percentile of the Laplace distribution at which the left truncation occurs, then just use the inverse transform as usual, without regard for truncation. The resulting algorithm requires one addition and one multiplication more than the original, so it is essentially just as efficient as the inverse transform method for the un-truncated distribution.
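A minimal Python sketch of the shift-and-rescale inverse-transform approach; the function names and the location/scale parameterization (mu, b) are my own choices for illustration:

```python
import math
import random

def laplace_cdf(x, mu=0.0, b=1.0):
    # CDF of the Laplace(mu, b) distribution.
    if x < mu:
        return 0.5 * math.exp((x - mu) / b)
    return 1.0 - 0.5 * math.exp(-(x - mu) / b)

def laplace_inv_cdf(p, mu=0.0, b=1.0):
    # Quantile function (inverse CDF) of Laplace(mu, b), for 0 < p < 1.
    if p < 0.5:
        return mu + b * math.log(2.0 * p)
    return mu - b * math.log(2.0 * (1.0 - p))

def truncated_laplace(trunc, mu=0.0, b=1.0, rng=random):
    # Shift and rescale a U(0,1) draw into [F(trunc), 1), then invert:
    # one extra addition and multiplication compared to the plain method.
    alpha = laplace_cdf(trunc, mu, b)
    u = alpha + (1.0 - alpha) * rng.random()
    return laplace_inv_cdf(u, mu, b)
```

Because the rescaled uniform lies in $[F(\alpha), 1)$ and the quantile function is monotone, every draw lands at or above the truncation point with no rejections at all.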
45,639
Efficient random generation from truncated Laplace distribution
If you need extreme efficiency and don't mind increased code complexity, you could adapt this ziggurat-like rejection sampling technique to the standard Laplace distribution directly and use shifts and scaling to produce distributions with arbitrary parameters.
45,640
Why do we use inverse Gamma as prior on variance, when empirical variance is Gamma (chi square)
No reconciliation is needed. In one case you are referring to the sampling distribution of the maximum likelihood estimator, which is a function of the data. In the other, you are referring to the posterior distribution of the actual model parameter. Two different referents; two different solutions. The advantage of conjugacy is that we get nice closed-form solutions for the posterior distribution, and this ability depends on the form of the likelihood function, not on the sampling distribution of the maximum likelihood estimator. If we look at the Normal likelihood function (with known mean zero), we see: $$\mathcal{L}(\sigma^2) \propto \sigma^{-n}\exp\left(-{\sum X_i^2\over 2\sigma^2}\right)$$ Note how the $\sigma^2$ terms are all in the denominators of ratios, not in the numerators. In order to maintain conjugacy, we need to find a prior distribution that looks similar: $$p(\sigma^2) \propto \sigma^{-a}\exp\left(-{b\over\sigma^2}\right)$$ which will lead to a posterior that has the same form: $$p(\sigma^2|X) \propto \sigma^{-(a+n)}\exp\left(-{b+\sum X_i^2/2\over\sigma^2}\right)$$ ... and that distribution is the inverse-Gamma. If we were to use the precision $\beta = 1/\sigma^2$ as our parameter of choice, we'd have: $$\mathcal{L}(\beta) \propto \beta^{n/2}\exp\left(-{\sum X_i^2\over 2}\beta\right)$$ and evidently the conjugate prior would be a Gamma distribution. Note that in the former case the $\sum X_i^2$ term and the $\sigma^2$ term sit in the numerator and denominator of a ratio, respectively, which leads (well, you have to do the math) to the inverse-Gamma distribution; in the latter case $\beta$ sits in the numerator along with $\sum X_i^2$, which leads to the Gamma distribution.
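The conjugate update can be checked numerically. The sketch below uses the standard shape/scale parameterization $\text{IG}(a,b)$ with density proportional to $(\sigma^2)^{-(a+1)}e^{-b/\sigma^2}$ (the bookkeeping differs slightly from the $\sigma^{-a}$ form above); the data and hyperparameters are made up:

```python
import math
import random

def log_inv_gamma_unnorm(s2, a, b):
    # Unnormalized log density of Inverse-Gamma(shape a, scale b) at s2.
    return -(a + 1.0) * math.log(s2) - b / s2

def log_posterior_unnorm(s2, xs, a, b):
    # log prior + log likelihood for x_i ~ N(0, s2), dropping constants.
    n, ss = len(xs), sum(x * x for x in xs)
    return (log_inv_gamma_unnorm(s2, a, b)
            - 0.5 * n * math.log(s2) - ss / (2.0 * s2))

random.seed(1)
xs = [random.gauss(0.0, 2.0) for _ in range(50)]
a, b = 3.0, 2.0                            # made-up prior hyperparameters
a_post = a + len(xs) / 2.0                 # shape gains n/2
b_post = b + sum(x * x for x in xs) / 2.0  # scale gains sum(x^2)/2
```

Up to an additive constant, `log_posterior_unnorm(s2, xs, a, b)` equals `log_inv_gamma_unnorm(s2, a_post, b_post)` for every `s2`, which is exactly what conjugacy buys us.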
45,641
Why do we use inverse Gamma as prior on variance, when empirical variance is Gamma (chi square)
Even though this question was posted 4 years ago, I will make a post since it seems that there is a misconception in the comments of the other post. The question: "So, the fact that Gamma is the sampling distribution of the MLE estimator has nothing to do with the fact that we use the InverseGamma as a prior on the model parameter?" From the perspective of jbowman's answer, the answer to the above question seems to be no. But there is actually a deeper explanation. As LeastSquaresWonderer mentions in the question, the maximum likelihood estimator of the variance is gamma distributed: \begin{align*} \sigma^2_{MLE} := \sum^N_{i=1} \frac{X_i^2}{N} \sim \Gamma\left(\frac{N}{2},\frac{2 \sigma^2}{N}\right) \ . \end{align*} The missing part is that this is the distribution of $\sigma^2_{MLE}$, whereas in Bayesian statistics the parameters themselves are treated as the random variables. Thus in the above expression we want $\sigma^2$ on the left-hand side, treated as a random variable. For a gamma distributed random variable $X \sim \Gamma(k,\theta)$ we have, for constant $c>0$, that $cX \sim \Gamma(k,c\theta)$. In our case we replace $c$ with $\frac{1}{\sigma^2 \sigma^2_{MLE}}$ (strictly speaking this is not a constant, so this step is a heuristic, fiducial-style inversion that treats the observed $\sigma^2_{MLE}$ as fixed) and get: \begin{align*} \frac{1}{\sigma^2 \sigma^2_{MLE}} \sigma^2_{MLE} = \frac{1}{\sigma^2} \sim \Gamma\left(\frac{N}{2},\frac{2}{N\sigma^2_{MLE}}\right) \ . \end{align*} Now if a random variable $X \sim \Gamma(k,\theta)$ is gamma distributed, then $\frac{1}{X}\sim \Gamma^{-1}(k,1/\theta)$, which is the inverse gamma distribution. Thus we get: \begin{align*} \sigma^2 \sim \Gamma^{-1}\left(\frac{N}{2},\frac{N\sigma^2_{MLE}}{2}\right) \ . \end{align*} This, together with the fact that the inverse gamma distribution is a conjugate prior for the variance of a normal distribution, motivates the use of the inverse gamma as a prior.
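The starting point of the argument, that $\sigma^2_{MLE}$ is $\Gamma\left(\frac{N}{2},\frac{2\sigma^2}{N}\right)$ distributed, can be sanity-checked by Monte Carlo. The seed, $N$, and replication count below are arbitrary choices for illustration:

```python
import random

# Monte Carlo check that sigma2_MLE = sum(x_i^2)/N for x_i ~ N(0, sigma^2)
# matches the Gamma(N/2, 2*sigma^2/N) moments: mean k*theta, variance k*theta^2.
random.seed(7)
N, sigma2, reps = 10, 4.0, 4000
draws = []
for _ in range(reps):
    xs = [random.gauss(0.0, sigma2 ** 0.5) for _ in range(N)]
    draws.append(sum(x * x for x in xs) / N)

mean_hat = sum(draws) / reps
var_hat = sum((d - mean_hat) ** 2 for d in draws) / reps
gamma_mean = (N / 2) * (2 * sigma2 / N)      # = sigma2
gamma_var = (N / 2) * (2 * sigma2 / N) ** 2  # = 2*sigma2**2/N
```

The empirical mean and variance of the simulated `draws` should land close to `gamma_mean` and `gamma_var` (here $\sigma^2 = 4$ and $2\sigma^4/N = 3.2$).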
45,642
How to identify if a problem is a good candidate for applying machine learning?
Prof. Yaser Abu-Mostafa talks briefly about this in his Caltech course on machine learning, during the first lecture. He identifies 3 essential points you have to consider before applying machine learning to your problem:

1st: a pattern exists. In order to be able to use your features for predicting anything, there has to be some relationship between those features and the thing you are predicting. An example of this might be trying to predict a person's height using data about what they ate yesterday. There is probably no relation between the two, so machine learning wouldn't apply.

2nd: the pattern cannot be written down mathematically. If you can solve the relation between input variables and prediction using a mathematical formula, then there is no need to apply machine learning. An example of this point might be using machine learning to predict the odds in a game of roulette. You can do that by calculating all the probabilities using equations from probability theory. The calculated odds would be exact, and machine learning would only produce less reliable solutions.

3rd: you have data. Machine learning tries to estimate parameters based on examples, and without data you cannot start using machine learning. An example of this might be trying to predict who will win a war using various data about the political climate, the technology each side has, the spending on the military, etc. If you had data for a lot of wars you might be able to do this. But since wars are pretty rare and there is no way to produce more of them on demand, machine learning will not work.

These are the main requirements - the essence of machine learning. Briefly, the examples in the question:

1) We have fairly high-accuracy ground-truth labels in our dataset. This seems highly subjective and context dependent. Consider predicting the age of death for a person, when the data you have only contains "a best guess" from their doctor. The data would be very noisy, but if we can reduce the unknown factor by 5% or so after applying machine learning, it might be worthwhile: the algorithm would be as good as the guess of a professional.

2) The distribution from which the data is sampled stays relatively constant. This is not a hard requirement. There is a subarea of machine learning that tries to deal with problems like these, called Concept Drift.

3) The output we are trying to learn is actually a function of the inputs we are given. This is the same as the 1st point mentioned by Prof. Abu-Mostafa: that the "pattern exists".

4) The effective number of independent samples in our dataset is high enough for the levels of noise in the dataset. This is very relevant, but at the same time subjective, just like the 1st point mentioned in the question. For some problems an improvement of a few percent might be considered good enough.

5) The metric we would like our model to optimize is quantifiable. Not sure if I understand this one. From the comments it seems to be about comparing different solutions in order to select the better one. I cannot quickly think of a scenario where this would not be satisfied, unless the practitioner doesn't really have a clear goal in mind.
45,643
How to identify if a problem is a good candidate for applying machine learning?
I would consider updating #5, as quantifiable metrics are not necessarily easy to optimize. For instance, directly optimizing 0-1 loss is NP-hard. So #5 could instead say: The metric we would like our model to optimize is quantifiable and is feasibly solvable (or has an appropriate surrogate). Other than that, your list looks pretty good to me. I wish more people would sit down and have this conversation before applying machine learning to a problem!
45,644
How to identify if a problem is a good candidate for applying machine learning?
1-5 are ideal, but I do not believe they are 100% required. Some will rightfully cringe at that sentence. The most important thing to remember is the "no free lunch theorem", which reminds us that ML is based on theories, hypotheses, and/or assumptions at every stage of the pipeline. We can only decide whether ML will help us in our task if we assume that some set of input features is not independent of our target (dependent variable). I won't make this an essay (nvm), so I'll comment on each item and add a few to your list. My answer is written as a discussion, not do's and don'ts. Please at least upvote if you find it helpful. (Disclaimer: all notes below were written with supervised ML in mind.)

1) We must have some assumed ground truth, but we cannot always quantify the accuracy of our labels. The labels may reflect perception and opinion (e.g. data came from human input) or unexplainable randomness from nature. Random and balanced sampling is preferred for a fair experimental setup, but not required to use ML. Some algorithms can handle minority labels (e.g. by adding weights), although most will do poorly if you do not reasonably balance (up/down sample) the data. Some algorithms (e.g. Breiman's Random Forest and the derivative Extra Trees) are designed to be robust against unexplained variance, and others (Logistic Regression and Naive Bayes) are designed to be probabilistic.

2) In easier problems, the distribution of input/output will remain constant. In many hard problems, it does not. Image, audio, text and time series are great examples. Stock market predictions are heavily influenced by recent data and are not likely to respect the global distribution. That doesn't mean we can't make good predictions; e.g. Amazon stock has grown slightly faster than linear over the last 5 years and will probably continue this rate of growth for a long while.

3) Ideally, the target will completely depend on the input. For example, Y = f(x) = 2x+1. In reality, we are modeling something like Y = f(x,z) = 2x+1 + g(z), where z represents independent signals (just think of these as features if you are not sure what this means) that would explain the error of our model, but are not available and may only exist in theory (e.g. a person's thought process at an instant in time caused an action that affected the value of the target, i.e. string theory). It might be more correct to say that the input must have some correlation to the output, and we will assume that the output is a function of the input.

4) Yes, good description. More simply, we just need "enough" data to make a reasonable prediction. How much is enough? It could actually be five samples, for all we know. If we create a learning curve and see that the performance meets our objective when we have N samples and is not much better with N+M samples, that is enough. There are many scenarios where we would like more data, and it would make significant improvements in the result, but it is too expensive to either collect or process. So in this case the number of samples is also constrained by cost, even if that means our data science objective becomes extremely difficult. So the requirement comes down to whether the output can be inferred from the input to some degree, and we will assume that there is enough data, because we want a system that quickly, cheaply, automatically and consistently does inference.

5) Yes, and @schem is also right. The output must be somewhat measurable against our desired outcome, otherwise how do we optimize and know how well we did? "Measurable" does not have to mean a numeric loss function, by the way. I could make a loss function with fuzzy metrics that are ordinal but not numeric; for example, my not-fun-to-use loss function outputs "the estimate is [too cold, cold, just right, hot, too hot]" or [good, ok, bad]. That is why I say "somewhat measurable".

Added items: 6, 7, 8, ...) Ideally, the cost (time, money, compute, ...) of building an ML solution and using it should be significantly less than that of humans or of programming an explicit solution. The value achieved by deploying ML should improve quality, consistency or cost. You should read about the four V's (http://www.ibmbigdatahub.com/infographic/extracting-business-value-4-vs-big-data) and the absurd followup, the 42 V's (https://www.elderresearch.com/blog/42-v-of-big-data), which also has many good points related to this discussion. I'll edit this later as I remember what else I wanted to write.

If this question is just for you, then I recommend reading a bit from the "Data Mining" book by Witten, Frank and Hall. It gives a great discussion of this topic. If you will be making training material, giving a presentation or similar communications, I recommend simplifying this list a lot for a new-to-ML audience. A creative data scientist will deal with what they have, invent solutions (e.g. build logical simulations of the problem when you have too little data to directly use ML, and use the available real data to validate the simulation exercise) and break the rules. Our assumptions, our satisfaction with experimental results, and items 5-8 govern whether ML was appropriate. The quality of the data is not of supreme importance (grain of salt) if you remember the law of "garbage in, garbage out".
How to identify if a problem is a good candidate for applying machine learning?
1-5 are ideal, but I do not believe are 100% required. Some will rightfully cringe at that sentence. The most important thing to remember is the "no free lunch theorem", which reminds us that ML is ba
How to identify if a problem is a good candidate for applying machine learning? 1-5 are ideal, but I do not believe are 100% required. Some will rightfully cringe at that sentence. The most important thing to remember is the "no free lunch theorem", which reminds us that ML is based on theories, hypotheses, and/or assumptions at every stage of the pipeline. We can only define whether ML will help us in our task if we assume that some set of input features are independent of our target (dependent variable). I won't make this an essay (nvm), so I'll comment on each item and add a few to your list. My answer is written as a discussion, not do's and don'ts. Please at least upvote if you find it helpful. (Disclaimer: All notes below were written with supervised ML in mind). 1) We must have some assumed ground truth, but we cannot always quantify accuracy of our labels. The labels may reflect perception and opinion (e.g. data came from human input) or unexplainable randomness from nature. Random and balanced sampling is preferred for fair experimental setup, but not required to use ML. Some algorithms can handle minority labels (e.g. by adding weights), although most will do poorly if you do not reasonably balance (up/down sample) the data. Some algorithms (e.g. Breiman's Random Forest and the derivative Extra Trees) are designed to be robust against unexplained variance and others (Logistic Regression and Naive Bayes) are designed to be probabilistic. 2) In easier problems, the distribution of input/output will remain constant. In many hard problems, it is not. Image, audio, text and time series are great examples. Stock market predictions are heavily influenced by recent data and are not likely to respect the global distribution. That doesn't mean we can't make good predictions. e.g. Amazon stock has grown slightly faster than linear over the last 5 years and it will probably continue this rate of growth for a long while. 
3) Ideally, the target will completely depend on the input. For example, Y = f(x) = 2x+1. In reality, we are modeling something like Y = f(x,z) = 2x+1 + g(z), where z represents independent signals (just think of this as features if you are not sure what this means) that would explain the error of our model, but are not available and may only exist in theory (e.g. a person's thought process at an instance of time caused an action that effected the result of the target, i.e. string theory). It might be more correct to say that the input must have some correlation to the output and we will assume that the output is a function of the input. 4) Yes, good description. Simpler, we just need "enough" data to make a reasonable prediction. How much is enough? It could actually be five samples for all we know. If we create a learning curve and see that the performance meets our objective when we have N samples and is not much better with N+M samples, that is enough. There are many scenarios where we would like more data and it would make significant improvements in the result, but it's too expensive to either collect or to process. So in this case, the number of samples is also reflected by cost, even if that means our data science objective becomes extremely difficult. So the requirement comes down whether the output can be inferred from the input to some degree and we will assume that there is enough data because we want a system that quickly, cheaply, automatically and consistently does inference. 5) Yes and @schem is also right. The output must be somewhat measurable against our desired outcome, otherwise how do we optimize and know how well did we did? "Measurable" does not have to mean a numeric loss function by the way. I could make a loss function with fuzzy metrics that are ordinal, but not numeric, for example my not-fun-to-use loss function outputs "the estimate is [too cold, cold, just right, hot, too hot]" or [good, ok, bad]. 
That is why I say "somewhat measurable".

Added items: 6, 7, 8, ...) Ideally, the cost (time, money, compute, ...) of building an ML solution and using it should be significantly less than having humans do the task or programming an explicit solution. The value achieved by deploying ML should improve quality, consistency or cost. You should read about the four V's (http://www.ibmbigdatahub.com/infographic/extracting-business-value-4-vs-big-data) and the absurd followup, the 42 V's (https://www.elderresearch.com/blog/42-v-of-big-data), which also has many good points related to this discussion. I'll edit this later as I remember what else I wanted to write.

If this question is just for you, then I recommend reading a bit of the "Data Mining" book by Witten, Frank and Hall. It gives a great discussion of this topic. If you will be making training material, giving a presentation or similar communications, I recommend simplifying this list a lot for an audience new to ML. A creative data scientist will deal with what they have, invent solutions (e.g. build logical simulations of the problem when you have too little data to directly use ML, and use the available real data to validate the simulation exercise) and break the rules. Our assumptions, our satisfaction with experimental results, and items 5-8 govern whether ML was appropriate. The quality of the data is not of super high importance (take that with a grain of salt), as long as you remember the law of "garbage in, garbage out".
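The learning-curve idea in item 4 can be sketched with made-up data (the toy linear model, noise level, and sample sizes here are all invented for illustration): fit on growing subsets and watch the held-out error flatten once N is "enough".

```python
import numpy as np

# Hypothetical sketch: y = 2x + 1 + noise, with half the data held out.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=2000)
y = 2 * x + 1 + rng.normal(scale=0.3, size=2000)
x_te, y_te = x[1000:], y[1000:]          # held-out half

for n in [5, 20, 100, 1000]:
    b, a = np.polyfit(x[:n], y[:n], 1)   # slope, intercept from first n points
    mse = np.mean((a + b * x_te - y_te) ** 2)
    print(n, round(mse, 4))              # flattens near the noise floor (~0.09)
```

Once the curve is flat, extra samples mostly buy cost, not accuracy, which is exactly the trade-off described above.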
45,645
Optimizing the ridge regression loss function with unpenalized intercept
I'd suggest assimilating via $v = (b, w)$ and doing $$ L(v) = \|y - Xv\|^2 + \lambda v^T \Omega v $$ where $$ \Omega = \text{diag}\left(0, 1, \dots, 1\right) $$ so $$ v^T\Omega v = \sum_{j \geq 2} v_j^2 = w^T w. $$ Now we have $$ L(v) = y^Ty - 2v^TX^Ty + v^T(X^TX + \lambda \Omega)v $$ so $$ \nabla L = -2X^Ty + 2 (X^TX + \lambda \Omega)v $$ and then $$ \nabla L \stackrel{\text{set}}= 0 \implies (X^TX + \lambda \Omega)v = X^Ty. $$ Regarding invertibility of $X^TX + \lambda \Omega$, let $z \in \mathbb R^p \backslash \{0\}$ and consider $$ z^T (X^TX + \lambda \Omega)z. $$ Let's try to make this $0$ to see if $X^TX + \lambda \Omega$ is PSD (positive semidefinite) or PD (positive definite). $z^T \Omega z = \sum_{j\geq 2}z_j^2$ so the only way to get $\lambda z^T \Omega z = 0$ is to have $z \propto e_1$, the first standard basis vector. So WLOG take $z = e_1$. Then $$ e_1^TX^TXe_1 = \|Xe_1\|^2 = \|X_1\|^2 $$ where $X_1$ is the first column of $X$. So this means that $X^TX + \lambda \Omega$ will be properly PD and not just PSD exactly when the first column of $X$, which we've made our unpenalized column by our choice of $\Omega$, is "full rank", which in this case just means non-zero. That's easy to check (just look at your data) so we don't have to worry about non-invertibility even though we're not doing a full ridge regression. One slightly unsatisfactory part of this penalty is that it makes the Bayesian interpretation less nice because to recover this as a MAP estimate we'd need $$ v\sim \mathcal N\left(0, \frac{\sigma^2}\lambda \Omega^{-1}\right) $$ except $\Omega$ is singular, so we don't actually have a pdf for this (so e.g. if you wanted to choose $\lambda$ by maximizing the marginal likelihood as with a Gaussian process, now you can't, or at least you won't get a Gaussian likelihood since it's not defined). 
One possible way to get this back is to instead view this as a mixed model $$ y = b\mathbf 1 + Zw + \varepsilon $$ where $w\sim \mathcal N(0, \frac{\sigma^2}{\lambda} I)$ now plays the role of a random effect, and $Z$ is $X$ without the first intercept-giving column. The actual estimation process may change here depending on how much you want this to actually be a mixed model but I think it's an interesting alternative. In particular, we can now integrate $w$ out and still end up with a valid Gaussian likelihood, and in the spirit of REML we could integrate $b$ out too if we put an improper uniform prior on it.
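A quick numeric sketch of the normal equation above (the data is simulated, and the code assumes the first column of X is the all-ones intercept column, matching the choice of $\Omega$): solving $(X^TX + \lambda\Omega)v = X^Ty$ shrinks the slopes but leaves the intercept unpenalized.

```python
import numpy as np

# Simulated data with an explicit intercept column (first column of X).
rng = np.random.default_rng(0)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([2.0, 1.0, -1.0]) + rng.normal(size=n)

lam = 5.0
Omega = np.diag([0.0] + [1.0] * (p - 1))  # zero in position (1,1): intercept unpenalized

# Solve (X'X + lam * Omega) v = X'y
v = np.linalg.solve(X.T @ X + lam * Omega, X.T @ y)

# With a huge lambda the slopes are crushed toward zero, but the intercept
# is not: it settles near mean(y), as it should when it carries no penalty.
v_big = np.linalg.solve(X.T @ X + 1e8 * Omega, X.T @ y)
print(v)
print(v_big)
```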
45,646
Sum of predicted values to the power of 10 [closed]
Let's work with natural logarithms, instead of base 10. You are tripped up by a common pitfall in the lognormal distribution: the expectation of the lognormal is not the exponential of the expectation $\mu$ on the log scale. You need to account for the heavy-tailedness by including the residual variance and calculate $e^{\mu+\sigma^2/2}$. In R: sum(exp(y))/sum(exp(p)) sum(exp(y))/sum(exp(p+summary(m)$sigma^2/2)) The last expression will come out around 1.
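The correction can be illustrated with a small simulation (synthetic data standing in for the asker's; here `sigma` plays the role of `summary(m)$sigma`, the residual SD on the log scale, and the linear predictor is known rather than fitted):

```python
import numpy as np

# Simulated log-scale regression: y = 1 + 0.5*x + eps, eps ~ N(0, sigma^2).
rng = np.random.default_rng(1)
n = 100_000
x = rng.normal(size=n)
sigma = 1.0
y = 1.0 + 0.5 * x + rng.normal(scale=sigma, size=n)  # response on the log scale
p = 1.0 + 0.5 * x                                    # predicted log-scale means

naive = np.exp(y).sum() / np.exp(p).sum()                     # biased: near e^{sigma^2/2} ~ 1.65
corrected = np.exp(y).sum() / np.exp(p + sigma**2 / 2).sum()  # near 1
print(naive, corrected)
```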
45,647
Sum of predicted values to the power of 10 [closed]
Differences in averages You have a function f(x) such that $f(\overline{x}) \neq \overline{f(x)}$. So just as the mean of the squares is not equal to the square of the mean, the mean of a power is not the power of the mean. You estimate the model $y_i = \hat{y}_i +e_i$. Then you compare $\overline{10^{y_i}}$ with $\overline{10^{\hat{y}_i}}$. Written differently, you compare $10^{y_i} = 10^{\hat{y}_i+e_i}$ with $10^{\hat{y}_i}$. The residual terms $e_i$ are obtained such that they average to zero for $y_i = \hat{y}_i +e_i$, but you do not get $10^{y_i} = 10^{\hat{y}_i} +e_i$ with the $e_i$ averaging to zero. The scaling differs for negative and positive residuals, because $10^{y+a}-10^y$ is a bigger difference than $10^{y}-10^{y-a}$. So most often $\overline{10^{y_i}} > \overline{10^{\hat{y}_i}}$ (even while $\overline{y_i} = \overline{\hat{y}_i}$), because the residuals do not 'count' the same after taking the power.

Simple example: $2^1+2^{-1} = 2+0.5 = 2.5 > 2 = 2^0 + 2^0$

You will always get $\overline{10^{y_i}} > \overline{10^{\hat{y}_i}}$ when all the $\hat{y}_i$ are the same, e.g. when you just model $\hat{y}_i = a$ instead of $\hat{y}_i = a + b x_i$. An example where you do not get $\overline{10^{y_i}} > \overline{10^{\hat{y}_i}}$ is the following (note the extra data point with x=100):

set.seed(1)
n = 10000
x = c(rnorm(n), 100)
y = x + rnorm(n+1)
m = lm(y ~ x)
p = predict(m)
sum(10^y)/sum(10^p)

giving 0.76, which is due to the data point at x=100 falling far below the line (the other 10000 points have much more weight in the fit) but contributing a lot once the power of 10 is taken (then the other 10000 points have much less weight).

What model/average to choose The choice between the two different averages, or the choice of model ($10^{y_i} = 10^{a + b x_i + e_i}$ versus $10^{y_i} = 10^{a + b x_i} + e_i$), will vary based on what weight you want to give to the different points (high versus low values).
See the below image for another example, with extra data points.

set.seed(1)
n = 200
x = c(rnorm(n), log(100*c(1:5)))
y = x + c(rnorm(n), rnorm(5, -1, 0.1))
m = lm(y ~ x)
p = predict(m)
sum(10^y)/sum(10^p)

One of the fit lines is according to a linear model: $$y_i = a x_i + b + e_i$$ The other is according to a non-linear model: $$(10^{y_i}) = 10^b (10^{x_i})^a +e_i $$ or, rewriting for simplicity with $v_i = (10^{y_i})$, $u_i = (10^{x_i})$, and $c=10^b$: $$v_i = c u_i^a +e_i$$ You see how the lines place different weights on different regions. In the first/left graph you see how the five points on the right have little weight in the linear model. In the second/right graph you see how the five points now have much larger values (while the 200 points on the left are barely visible) and their residual terms get more weight. It depends a lot on your goals which representation/model/average you want to choose, as well as on the original model that generates the data (how the errors are distributed). Say you want a fitted curve to make predictions of $10^{Y}$ over the (entire) range of $10^{X}$; then the non-linear model might be better, since the linear model puts more weight on the residuals of the smaller values.

What you want to do with the average of all $y_i$ or $10^{y_i}$ is unclear. To me it makes no sense, because they depend on the $x_i$, which may differ from test to test (you say you are computing a population size, but what population is that if there are many $x_i$?). The model parameters seem to be more relevant, but then again I don't know what you are doing with the average.
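The "always" case above (constant predictions) is just Jensen's inequality for the convex function $10^t$; a quick check with arbitrary simulated values:

```python
import numpy as np

# Arbitrary simulated y; the prediction is constant and equal to mean(y).
rng = np.random.default_rng(0)
y = rng.normal(size=1000)
p = np.full_like(y, y.mean())

# mean(10^y) exceeds mean(10^p) = 10^mean(y) whenever y is not constant (Jensen).
print((10**y).mean(), (10**p).mean())
```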
45,648
Fisher's Information for Laplace distribution
Your notation is ridiculously over-complicated for what you're doing. For the Laplace distribution with unit scale (which is the density you have given) you have $l_x(\theta) = - \ln 2 - |x - \theta|$, which has the (weak) derivative: $$\frac{\partial l_x}{\partial \theta}(\theta) = \text{sgn}(x- \theta) \quad \text{for } x \neq \theta.$$ Hence, the Fisher information for the location parameter is: $$\mathcal{I}(\theta) = \mathbb{E} \Bigg[ \Big( \frac{\partial l_X}{\partial \theta}(\theta) \Big)^2 \Bigg| \theta \Bigg] = \mathbb{E} \Big[ \text{sgn}(X-\theta)^2 \Big| \theta \Big] = \mathbb{E} [ 1 | \theta ] = 1.$$ (The fact that the derivative is undefined at $x = \theta$ does not affect this calculation, since this occurs with probability zero.)
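A Monte Carlo sanity check (simulated unit-scale Laplace draws; the value of `theta` is an arbitrary choice): the squared score is 1 almost surely, so its sample mean is exactly 1.

```python
import numpy as np

# The score is sgn(x - theta); its square equals 1 except on the
# probability-zero event x == theta, which continuous draws never hit.
rng = np.random.default_rng(0)
theta = 2.0
x = rng.laplace(loc=theta, scale=1.0, size=100_000)
score = np.sign(x - theta)
print((score**2).mean())   # the Fisher information estimate: 1.0
print(score.mean())        # the score itself averages to roughly 0
```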
45,649
Meaning of "identically distributed" when there's only one variable
You have $n$ observations, $y\in\mathbb{R}^n$. You correspondingly have $n$ noise terms, $\epsilon\in\mathbb{R}^n$. The last sentence means that each separate noise term $\epsilon_i$ is identically distributed and that they are independent. (In more general situations, the $\epsilon_i$ may not be identically distributed, or dependent, e.g., in time series analysis. You need more complex tools in such a situation.)
45,650
Meaning of "identically distributed" when there's only one variable
Identically distributed generally means that each observation of a variable was sampled from a distribution identical to that of every other observation on that variable (and "i.i.d." adds that they were sampled independently). A simple random walk where $y_{t} = y_{t-1} + 2\,\mathrm{Bernoulli}\left(0.5\right)-1$ is an example of a variable ($y_{t}$) that is not i.i.d.: each value of $y_{t}$ depends quite directly on its immediately prior value, and in fact, the process "remembers" all perturbations to it infinitely. More, the variance of $y_{t}$ is a function of $t$ (in fact, $\sigma^{2}_{y_{t}}=t$, starting from $y_{0}=0$), meaning that different values in this time series cannot possibly have "the same" distribution.
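A short simulation of that walk confirms the variance grows like $t$ (the number of paths and steps here are arbitrary choices):

```python
import numpy as np

# Many independent +/-1 random-walk paths; Var(y_t) should be close to t,
# so y_10 and y_100 clearly do not share one distribution.
rng = np.random.default_rng(0)
steps = rng.choice([-1, 1], size=(20_000, 100))  # 20k paths, 100 steps each
paths = steps.cumsum(axis=1)
print(paths[:, 9].var(), paths[:, 99].var())     # roughly 10 and 100
```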
45,651
Proof that $K(x,y) = f(x)f(y)$ is a kernel
$\sum_{i=1}^n\sum_{j=1}^nK(x_i, x_j)c_ic_j=\sum_{i=1}^n\sum_{j=1}^nf(x_i)f(x_j)c_ic_j = (\sum_{i=1}^nf(x_i)c_i)^2 \geq 0$
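The same inequality, checked numerically on a small Gram matrix (the choice of $f = \sin$ and the sample points are arbitrary): $K$ is a rank-one outer product, so it is positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=8)
fx = np.sin(x)                       # an arbitrary choice of f
K = np.outer(fx, fx)                 # K_ij = f(x_i) f(x_j)

c = rng.normal(size=8)
print(c @ K @ c)                     # equals (sum_i f(x_i) c_i)^2 >= 0
print(np.linalg.eigvalsh(K).min())   # eigenvalues nonnegative up to rounding
```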
45,652
Gradient and hessian of the MAPE
The Mean Absolute Percentage Error (MAPE) is defined as $$\text{MAPE} := \frac{1}{N}\sum_{i=1}^N\frac{|\hat{y}_i-y_i|}{y_i},$$ where the $y_i$ are actuals and the $\hat{y}_i$ are predictions. The gradient is the vector collecting the first derivatives: $$\frac{\partial\text{MAPE}}{\partial\hat{y}_i} = \begin{cases} -\frac{1}{Ny_i}, & \text{ if } \hat{y}_i<y_i \\ \text{undefined}, & \text{ if } \hat{y}_i=y_i \\ \frac{1}{Ny_i}, & \text{ if } \hat{y}_i>y_i \\ \end{cases} $$ The interpretation is that if you are underestimating ($\hat{y}_i<y_i$), then increasing $\hat{y}_i$ by one unit will reduce your MAPE by $\frac{1}{Ny_i}$, and the converse if you reduce $ \hat{y}_i$ by one unit. The Hessian is the matrix containing the mixed second derivatives. Since the gradient does not contain the predictions any more, taking second derivatives will result in zeros everywhere that it is defined: $$\frac{\partial^2\text{MAPE}}{\partial\hat{y}_i\partial\hat{y}_j} = \begin{cases} 0, & \text{ if } \hat{y}_i\neq y_i \text{ and }\hat{y}_j\neq y_j \\ \text{undefined} & \text{ else} \end{cases} $$
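A direct translation of the gradient into code, verified against finite differences (toy values; this assumes all $y_i > 0$ and no tie $\hat{y}_i = y_i$, where the derivative is undefined):

```python
import numpy as np

def mape(y, yhat):
    return np.mean(np.abs(yhat - y) / y)

def mape_grad(y, yhat):
    # sign(yhat_i - y_i) / (N * y_i), i.e. -1/(N y_i) when underestimating.
    return np.sign(yhat - y) / (len(y) * y)

y = np.array([1.0, 2.0, 4.0])
yhat = np.array([1.5, 1.0, 5.0])
g = mape_grad(y, yhat)

# Central finite differences agree, since we are away from the kinks.
eps = 1e-6
for i in range(len(y)):
    e = np.zeros(len(y)); e[i] = eps
    fd = (mape(y, yhat + e) - mape(y, yhat - e)) / (2 * eps)
    assert abs(fd - g[i]) < 1e-6
print(g)
```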
45,653
Object detection - how to annotate negative samples
Short answer: You don't need negative samples to train your model. Focus on improving your model.

Long answer: Ideally your detector, after being trained on your 2 objects, would detect them and place a bounding box around them. When you test it and get wrong results, this could be caused by a variety of reasons:

A wrong object is detected if your detector misinterprets it as belonging to one of the 2 objects you trained it on.

When an image of your 2 objects is fed to the detector but nothing is detected, this also means your detector failed to detect your object.

Both cases could of course happen in the same image and also occur more than once. As far as I know, images with no bounding box in them, which would be your "negative examples", cannot be processed by the current model because the tfrecord reader just fails. Negative samples do exist implicitly, though. Every region of your images that does not correspond to a bounding box is a "negative sample". Defining explicit "negative samples" by selecting them in a bounding box will create a new class with the name 'none'. You will have 3 classes then.

So, to make it simpler, focus on your positive examples. If your model fails badly then something is wrong. Check that:

Your test images do not differ significantly from the trained ones. If they do, try to find images similar to your test images and use them along with your previous images to retrain your model.

You have enough training samples to train your model. "Enough" is not defined strictly, but may mean at least a few hundred images.

Your training set contains enough samples from both objects. If not, the underrepresented class will be detected poorly compared to the other one.

You don't have implicit negative examples of your object that could hinder the training process. This means if you detect cars, make sure at least all clearly visible appearances in your images are labeled as cars. If you only label 1 car and leave 2 unlabeled, those two cars will hamper your training.
45,654
Object detection - how to annotate negative samples
You can simply use the verify feature in LabelImg and it will create a file without annotations you can run through your process. Reading through this article it seems negative images are indeed needed.
45,655
Object detection - how to annotate negative samples
The OP asked about negative samples in the Tensorflow Object Detection API. I agree that we do not need specific images for negative samples in NN-based object detection. All negative samples are implicitly available when some areas of the images are not labelled (no bounding box on them).
45,656
Why does VGG16 double number of features after each maxpooling layer?
You should really ask in the course forum :) or contact Jeremy on Twitter, he's a great guy. Having said that, the idea is this: subsampling, aka pooling (max pooling, mean pooling, etc.; currently max pooling is the most common choice in CNNs) has three main advantages: (1) it makes your net more robust to noise: if you alter each neighborhood in your input layer slightly, the mean of each neighborhood won't change a lot (the smoothing effect of the sample mean). The max doesn't have this smoothing effect; however, since it's the largest activation value, the relative variation due to noise is (on average) smaller than for other pixels. (2) It introduces some level of translation invariance: by reducing the number of features in the output layer, if you move the input image slightly, chances are the output of the subsampling layer won't change, or will change less. See here for a nice picture. (3) Also, by reducing the number of features, the computational effort in training and predicting is reduced, and overfitting becomes less likely. However, not everyone agrees with point 3. In the famous Alexnet paper, which can be considered as the "rebirth" of CNNs, the authors used overlapping neighborhoods (i.e., strides along x and y smaller than the extension of the subsampling neighborhood along x and y respectively) in order to get the same number of features for the input and the output of the subsampling layer. This makes the model more flexible, which is what Jeremy was hinting at. You get a more flexible model, at the risk of more overfitting - but you can use other Deep Learning tools to fight overfitting. It's really a design choice - you'll typically need validation data sets to try different architectures and see what works best. EDIT: It just occurred to me that I have misunderstood what you were asking for. VGG16, unlike Alexnet, uses nonoverlapping max pooling (see chapter 2.1 of the paper I linked, right at the end of the first paragraph). 
Thus the size of the channels does reduce by 50% after each pooling layer. This is compensated by the doubling of the width of the convolutional layers: actually, it doesn't always happen - after the penultimate maxpool, the width remains 512, the same as before pooling. Again, this is a design choice: it's not set in stone, as confirmed by the fact that they don't follow this rule for the last convolutional layer. However, it's by far the most common design choice: for example, both LeNet and Alexnet follow this rule, even though LeNet uses nonoverlapping pooling (the size of each channel is halved, as for VGG16), while Alexnet uses overlapping pooling. The idea is simple - you introduce maxpooling to add robustness to noise and to help make the CNN translation equivariant, as I said before. However, you also don't want to throw away information contained in the image together with the noise. To do that, for each convolutional layer you double the number of channels. This means that you have twice as many "high level features", so to speak, even if each of them contains half as many pixels. If your input image activates one of these high-level features, its activation will be passed to the following layers. Granted, this added flexibility adds a risk of overfitting, which they combat with the usual techniques (see chapter 3.1): $L_2$ regularization and dropout for the last two layers, learning rate decay for the whole net.
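To make the halving/doubling pattern concrete, here is a small sketch (plain Python; widths and input size taken from the VGG16 architecture) tracking the feature-map shape after each of the five pooling stages:

```python
# VGG16 on a 224x224 input: each non-overlapping 2x2/stride-2 maxpool halves
# the spatial side, while the conv width doubles at each stage -- except after
# the penultimate pool, where it stays at 512 (the design exception above).
widths = [64, 128, 256, 512, 512]   # conv width in each of the five blocks
side = 224
shapes = []
for w in widths:
    side //= 2                      # effect of one maxpool on the spatial side
    shapes.append((side, side, w))  # feature-map shape right after that pool

# Pixels per channel are quartered by each pool while channels (mostly) double,
# so the total activation volume shrinks ~2x per stage until width saturates.
volumes = [h * w * c for (h, w, c) in shapes]
```

The final shape works out to the familiar 7x7x512 feature map that feeds VGG16's fully connected layers.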
45,657
Why does VGG16 double number of features after each maxpooling layer?
If you think that the first convolutional layer extracts simple features, like lines, then the next convolutional layer has to combine these lines into more complicated structures. Basically, each channel in the next layer is responsible for one combination of different lines into a more complicated structure. Since the structures get more complicated, there are more features that you can construct from simple lines. Think of a lego-building classifier: the first layer learns the basic lego building blocks, the second one looks for what things you can construct using these blocks, and clearly there are way more things that you can build than there are basic building blocks. Another analogy is a digit classifier. You have 7 LEDs in one matrix, which means that the first layer can learn all 7 lines (or maybe just 2, if you learn that there is one horizontal and one vertical line, just in different places). Using just 7 features you can construct 10 different digits. In addition, you can construct letters, like L, A, H and so on. So in the next convolutional layer, you should have more than 7 channels in order to be able to learn different letters and digits. These are toy examples, but the same intuition extends to the more realistic problems that large convolutional neural networks are trying to solve.
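The seven-segment analogy can be checked directly: with only 7 binary "line" features, all 10 digits (and letters such as L, A, H) still get distinct codes. A small sketch, using the conventional segment naming a-g:

```python
# Standard 7-segment encodings: which of the segments a-g light up per symbol.
# (Conventional layout: a=top, b=top-right, c=bottom-right, d=bottom,
#  e=bottom-left, f=top-left, g=middle.)
SEGMENTS = {
    "0": "abcdef", "1": "bc",     "2": "abdeg",   "3": "abcdg", "4": "bcfg",
    "5": "acdfg",  "6": "acdefg", "7": "abc",     "8": "abcdefg", "9": "abcdfg",
    "L": "def",    "A": "abcefg", "H": "bcefg",
}
# Treat each symbol as a set of active "line features"
codes = {sym: frozenset(seg) for sym, seg in SEGMENTS.items()}
```

Thirteen symbols, all distinguishable from combinations of just seven primitive features — which is the point of the analogy: later layers need more channels than the first because there are many more combinations than primitives.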
45,658
$X$, $Y$ independent identically distributed. Are there counterexamples to symmetry of $X-Y$?
Corrected after @Glen_b pointed out a glaring error. Sloppy proof, but it should work. I think we can prove this using characteristic functions. Let $X$, $Y$ be iid, and let $Z = Y-X$. Then $\phi_{X-Y}(t) = E[e^{it(X-Y)}] = \phi_X(t)\phi_{-Y}(t)$. Similar to the CDF, the characteristic function of $X$ uniquely characterizes the distribution of $X$, and it exists for any real-valued random variable. This implies that $\phi_X(t) \equiv \phi_Y(t)$. This, along with properties of the characteristic function under linear transformation, implies that $\phi_{-X}(t) =\phi_{X}(-t) = \phi_{Y}(-t)=\phi_{-Y}(t) $. In turn, this implies that $\phi_X(t)\phi_{-Y}(t) = \phi_X(t)\phi_{-X}(t) = \phi_Y(t)\phi_{-X}(t) =\phi_{Y-X}(t) $, so that $\phi_{X-Y}(t)=\phi_{Y-X}(t)$ and $Z \sim -Z$.
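The result is easy to check numerically. A quick Monte Carlo sketch, deliberately using a heavily asymmetric Exp(1) distribution for the iid pair (for iid exponentials, $X-Y$ is in fact exactly Laplace, hence symmetric about 0):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.exponential(size=n)   # X and Y iid, strongly right-skewed
y = rng.exponential(size=n)
z = x - y                     # should be symmetric about 0

# Symmetry checks: matching tail probabilities and a mean near zero
p_right = np.mean(z > 1.0)
p_left = np.mean(z < -1.0)
```

Even though each marginal is skewed, the two tail probabilities agree (both close to the Laplace tail $e^{-1}/2 \approx 0.184$) and the sample mean of $Z$ is near zero.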
45,659
$X$, $Y$ independent identically distributed. Are there counterexamples to symmetry of $X-Y$?
Just to clear up the source of my own confusion, I managed to coax just enough (about 4 lines!) out of Google books to resolve the origin of my doubt. It was from Romano and Siegel* and what they actually have there is: 4.34 Identically distributed random variables such that their difference does not have a symmetric distribution If $X$ and $Y$ are independent and identically distributed, then $X-Y$ has a symmetric distribution about zero. The independence assumption cannot be dropped in general. (However, if $X$ and $Y$ are exchangeable, then $X-Y$ does have a symmetric distribution.) A simple counterexample I came up with is $X\sim U[0,3)$ and $Y=(X-1) \text{ mod } 3$, so $Y$ is also uniform on $[0,3)$ and for which $X-Y$ takes the value $1$ with probability $\frac23$ and $-2$ with probability $\frac13$. This isn't the one I thought of just after I posted the question, but once you have one, further counterexamples are easy to think up and this one is simpler to explain. (Edit: in the end I managed to see their counterexample for the dependent case; it's fine - a simple bivariate example on $\{-1,0,1\}^2$ - but mine's simpler to express so I'll leave it there.) Note here that $(X,Y)$ doesn't have the same distribution as $(Y,X)$ -- so we don't have the exchangeability that R&S mention, which is why the asymmetry is possible. Note also that the informal argument in my question - "interchange the roles of $X$ and $Y$" - quite directly relied on exchangeability and we can see from that outline why that weaker condition should be sufficient to get that $X-Y$ and $Y-X$ have the same distribution. * Romano, J.P. and Siegel, A.F. (1986), Counterexamples in Probability And Statistics, (Wadsworth and Brooks/Cole Statistics/Probability Series) (my question was based on a recollection from reading a little of it in late 1986... so it's no surprise I was a little fuzzy on the details)
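The counterexample is easy to verify by simulation (a numerical sketch with numpy; differences are rounded to absorb floating-point noise in the modular arithmetic):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 3, size=100_000)   # X ~ U[0, 3)
y = (x - 1) % 3                       # Y is also U[0, 3), but dependent on X
diff = np.round(x - y, 6)             # X - Y, rounded to kill fp noise

values = sorted(set(diff.tolist()))   # the support of X - Y
frac_one = np.mean(diff == 1.0)       # should be ~2/3
```

As claimed, $X-Y$ takes only the values $1$ (with probability $2/3$, whenever $X \ge 1$) and $-2$ (with probability $1/3$, whenever $X < 1$) — clearly not a symmetric distribution, even though its mean is $0$.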
45,660
What is the probability for an N-char string to appear in an M-length random string?
This answer can be viewed as supplemental to @StephanKolassa's answer, in light of the counterexample provided in the comment by @whuber. Although the accepted answer is not a general solution, it does work for the specific question asked by the OP. We will start with a sufficient condition for the formula to hold. Let $L$ be the length of the largest string $\gamma$ such that $\alpha = \gamma\delta\gamma$, where $\delta$ is an arbitrary string. Let $\alpha$ and $\beta$ be strings of length $S$ and $M$ respectively. If $\beta$ is generated uniformly at random from an alphabet with $N$ distinct characters, then the probability that $\beta$ contains $\alpha$ is equal to $$\frac{M-S+1}{N^S}$$ so long as $S < M < 2S-L$. This extra condition $M < 2S - L$ is sufficient to ensure that no double counting of strings occurs. Without this condition, there is a potential for double counting, so that the formula becomes an upper bound on the actual probability. Note also that this condition rules out @whuber's counterexample ($3 \nless 2\cdot 2 - 1$). Other examples In the original example, if we increase $M$ from $7$ to $8$, the string DeadDead would be counted twice: once when we count all strings of the form Dead???? and again when we count strings like ????Dead. To see the role of $L$ it is helpful to consider a new string of interest, say $\alpha = $onion, which has $L = 2$ ($\gamma =$ on, $\delta =$ i). Suppose for example that $M=8$, and consider the string onionion. This will be double counted when we consider patterns onion??? and ???onion. dead not Dead What if the OP had asked for the probability that a $7$ character string contained the word dead rather than Dead? Now, the previously stated formula would not apply, because $7 \nless 2\cdot 4 - 1$. Thankfully, the answer is straightforward here, since we are overcounting by just one. The probability becomes $$\frac{4}{62^4} - \frac{1}{62^7},$$ which, of course, is practically indistinguishable from the previous answer. 
As $M$ grows and becomes much larger than $2S - L$, however, the propensity for double counting will grow, and the upper bound will become less tight. Bonus It is not too hard to come up with an English word having $L = 3$, for example ionization. Comment if you can come up with a common English word such that $L=4$! Edit: The bonus points go to @SextusEmpiricus who tracked down the English word sterraster! This word corresponds to $L=4$. Anybody want to try for $L=5$?
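The role of the $M < 2S - L$ condition can be verified by brute force on a tiny alphabet. Sketch: for the word "ab" ($L=0$, $M=3 < 2\cdot 2 - 0$) the formula is exact, while for "aa" ($L=1$, the same shape as @whuber's counterexample) it overcounts:

```python
from itertools import product

def exact_prob(word, M, alphabet):
    """Exact P(word appears in a uniformly random length-M string),
    computed by full enumeration (feasible only for tiny alphabets)."""
    hits = sum(word in "".join(s) for s in product(alphabet, repeat=M))
    return hits / len(alphabet) ** M

def formula(word, M, N):
    """The (M - S + 1) / N**S approximation from the accepted answer."""
    S = len(word)
    return (M - S + 1) / N ** S

# "ab": L = 0, so M = 3 < 2*2 - 0 holds and the formula is exact
p_ab = exact_prob("ab", 3, "ab")
# "aa": L = 1, so M = 3 is NOT < 2*2 - 1; "aaa" is counted twice
p_aa = exact_prob("aa", 3, "ab")
```

Enumeration gives $1/2$ for "ab" (matching the formula) but only $3/8$ for "aa", where the formula still reports $1/2$ — the strict upper-bound behaviour described above.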
45,661
What is the probability for an N-char string to appear in an M-length random string?
The answer by Stephan Kolassa works, but it is not general, as noted in the answer by knrumsey. Several questions here have had similar issues with the overlap/double counting. For methods to solve this see Probability of a similar sub-sequence of length X in two sequences of length Y and Z A fair die is rolled 1,000 times. What is the probability of rolling the same number 5 times in a row? Below is an example of a Markov chain that can be used to solve it. In this case it is case-insensitive (if we have x correct letters, then there are two possible letters, upper case and lower case, that can be added to get x+1 correct letters). To compute the probability you take the appropriate power of the matrix that describes the Markov chain.

library(matrixcalc)

stateNames <- c("-","d","de","dea","dead")
M <- matrix(c(60/62, 58/62, 58/62, 60/62,  0/62,
               2/62,  2/62,  2/62,  0/62,  0/62,
               0/62,  2/62,  0/62,  0/62,  0/62,
               0/62,  0/62,  2/62,  0/62,  0/62,
               0/62,  0/62,  0/62,  2/62, 62/62),
            nrow=5, byrow=TRUE)
row.names(M) <- stateNames
colnames(M) <- stateNames

M
#               -          d         de        dea dead
# -    0.96774194 0.93548387 0.93548387 0.96774194    0
# d    0.03225806 0.03225806 0.03225806 0.00000000    0
# de   0.00000000 0.03225806 0.00000000 0.00000000    0
# dea  0.00000000 0.00000000 0.03225806 0.00000000    0
# dead 0.00000000 0.00000000 0.00000000 0.03225806    1

matrix.power(M,7)["dead","-"]
# 4.331213e-06

In the above links, there are examples of how to compute estimates of the solution obtained with this Markov chain.
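The same computation can be sketched in Python with numpy instead of R's matrixcalc (same transition matrix; entry [to, from], so column j is the current state). The result also matches the inclusion–exclusion count $119163/31^7$ for the case-insensitive word over a 62-character alphabet:

```python
import numpy as np

# States: "-", "d", "de", "dea", "dead"; 2/62 = prob of the next
# (case-insensitive) correct letter. Columns sum to 1.
M = np.array([
    [60/62, 58/62, 58/62, 60/62, 0.0],
    [ 2/62,  2/62,  2/62,  0.0,  0.0],
    [ 0.0,   2/62,  0.0,   0.0,  0.0],
    [ 0.0,   0.0,   2/62,  0.0,  0.0],
    [ 0.0,   0.0,   0.0,   2/62, 1.0],
])
# Start in "-", take 7 steps, read off the absorbing "dead" state
p_dead = np.linalg.matrix_power(M, 7)[4, 0]
```

This reproduces the 4.331213e-06 printed by the R code above.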
45,662
What is the probability for an N-char string to appear in an M-length random string?
EDIT: The answer by knrumsey is better than mine. I hope the OP will un-accept my answer and accept theirs. (I would consider deleting mine, but it may serve as useful context for knrumsey's.) Overall, there are $62^M$ different possible strings, because you have $62$ choices for each of the $M$ characters. How many of these $62^M$ strings contain your prespecified string $\alpha$? Well, for each "hit", we still have $M-S$ characters that we can choose freely, and $\alpha$ can appear in $M-S+1$ different places in the full string. So we have $(M-S+1)\times 62^{M-S}$ "hits". Dividing, we get a probability of $$ \frac{(M-S+1)\times 62^{M-S}}{62^M} = \frac{M-S+1}{62^S}.$$ When $M = 7$ and $S = 4$, the probability is 1 in 3,694,084. (Of course, that doesn't account for the fact that the effect would have been the same if the random string had contained similar words like "killed" or "corpse", or simply a different capitalization of "Dead".)
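The closing arithmetic can be checked directly (a trivial sketch):

```python
# M = 7 characters total, S = 4 target characters, 62-character alphabet
M, S = 7, 4
p = (M - S + 1) / 62 ** S        # = 4 / 62**4
one_in = 62 ** S // (M - S + 1)  # the "1 in ..." form of the same probability
```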
45,663
Role of delays in LSTM networks
UPDATED Your example is very interesting. On one hand, it is constructed in such a way that you really need only one parameter, and its value is 1: $$y_t=\beta+w y_{t-1}\\\beta=0\\w=1$$ Your training data set is small (96 observations), but with a three-layer network you have quite a few parameters. It's very easy to overfit. The most interesting part is your test code. It is not clear whether you're trying to do a sequence of one-step forecasts or a dynamic multi-step forecast. In a one-step forecast, you predict for time t and get $\hat y_t=f(x_t)=f(y_{t-1})$. So you always forecast with the latest observed information to make a one-step-ahead prediction, then proceed to the next time period. Notice how above I'm using $y_{t-1}$ and not $\hat y_{t-1}$. That is the important distinction: in a one-step forecast you always use the observed value from the previous step. In contrast, a dynamic forecast uses the previous prediction to come up with the next: $\hat y_t=f(\hat y_{t-1})$. That is why it's called dynamic. So, first, I re-arranged your code a little bit and modified it to produce the one-step and dynamic forecasts for comparison. 
Here it is with outputs followed:

# In[50]:
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN
from sklearn.metrics import mean_squared_error
import numpy as np
from keras import regularizers
from numpy.random import seed

data = [0,1,2,3,2,1]*20

def shape_it(X):
    return np.expand_dims(X.reshape((-1,1)),2)

# In[51]:
n_data = len(data)
data = np.matrix(data)
n_train = int(0.8*n_data)

# In[52]:
X_train = shape_it(data[:,:n_train])
Y_train = shape_it(data[:,1:(n_train+1)])
X_test = shape_it(data[:,n_train:-1])
Y_test = shape_it(data[:,(n_train+1):])

# In[26]:
plt.plot(X_train.reshape(-1,1))
plt.plot(Y_train.reshape(-1,1))
plt.show()

# In[27]:
plt.plot(X_test.reshape(-1,1))
plt.plot(Y_test.reshape(-1,1))
plt.show()

# In[75]:
model = Sequential()
batch_size = 1
model.add(SimpleRNN(12, batch_input_shape=(batch_size, X_train.shape[1], X_train.shape[2]), stateful=True))
model.add(Dense(12))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')

epochs = 1000
for i in range(epochs):
    model.fit(X_train, np.reshape(Y_train,(-1,)), epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
    model.reset_states()

# build state
model.reset_states()
model.predict(X_train, batch_size=batch_size)

predictions = list()
for i in range(len(X_test)):
    # make one-step forecast
    X = X_test[i]
    X = X.reshape(1, 1, 1)
    yhat = model.predict(X, batch_size=batch_size)[0,0]
    # store forecast
    predictions.append(yhat)
    expected = Y_test[i]
    print('Month=%d, Predicted=%f, Expected=%f' % (i+1, yhat, expected))

# report performance
rmse = np.sqrt(mean_squared_error(Y_test.reshape(len(Y_test)), predictions))
print('Test RMSE: %.3f' % rmse)

# line plot of observed vs predicted
plt.plot(Y_test.reshape(len(Y_test)))
plt.plot(predictions)
plt.show()

Now we've got the picture you were expecting. Your original code had a couple of issues. One is that ReLU is not a good idea for this particular problem. 
You have a linear problem, so the 'linear' (default) activation should work better. The second issue is that the network has to be run statefully: stateful=True on the RNN layer, with shuffle=False in fit and a state reset between epochs. Finally, I changed the prediction implementation to make it a one-step forecast. This is not bad, but it's only a one-step forecast. Next we'll try the dynamic forecast, as explained earlier.

# build state
model.reset_states()
model.predict(X_train, batch_size=batch_size)
dynpredictions = list()
dyhat = X_test[0]
for i in range(len(X_test)):
    # make dynamic forecast: feed the previous prediction back in
    dyhat = dyhat.reshape(1, 1, 1)
    dyhat = model.predict(dyhat, batch_size=batch_size)[0,0]
    # store forecast
    dynpredictions.append(dyhat)
    expected = Y_test[i]
    print('Month=%d, Predicted Dynamically=%f, Expected=%f' % (i+1, dyhat, expected))

drmse = np.sqrt(mean_squared_error(Y_test.reshape(len(Y_test)), dynpredictions))
print('Test Dynamic RMSE: %.3f' % drmse)
# line plot of observed vs predicted
plt.plot(Y_test.reshape(len(Y_test)))
plt.plot(dynpredictions)
plt.show()

The dynamic forecast doesn't look so good, as seen below. Recall that we are now out of sample: unlike in the one-step forecast, we are not using observed values beyond observation #96. Still, we want to nail it, because the problem is so obvious to us that we want the NN to figure it out too. I'm going to try a different NN, with just one hidden layer and regularization to fight overfitting, as follows.
seed(1)
modelR = Sequential()
batch_size = 1
modelR.add(SimpleRNN(4, batch_input_shape=(batch_size, X_train.shape[1], X_train.shape[2]), stateful=True,
                     kernel_regularizer=regularizers.l2(0.01), activity_regularizer=regularizers.l1(0.)))
modelR.add(Dense(1, kernel_regularizer=regularizers.l2(0.01), activity_regularizer=regularizers.l1(0.)))
modelR.compile(loss='mean_squared_error', optimizer='adam')
epochs = 1000
for i in range(epochs):
    modelR.fit(X_train, np.reshape(Y_train, (-1,)), epochs=1, batch_size=batch_size, verbose=0, shuffle=False)
    modelR.reset_states()

# build state
modelR.reset_states()
modelR.predict(X_train, batch_size=batch_size)
predictions = list()
for i in range(len(X_test)):
    # make one-step forecast
    X = X_test[i]
    X = X.reshape(1, 1, 1)
    yhat = modelR.predict(X, batch_size=batch_size)[0,0]
    # store forecast
    predictions.append(yhat)
    expected = Y_test[i]
    print('Month=%d, Predicted=%f, Expected=%f' % (i+1, yhat, expected))

# report performance
rmse = np.sqrt(mean_squared_error(Y_test.reshape(len(Y_test)), predictions))
print('Test RMSE: %.3f' % rmse)
# line plot of observed vs predicted
plt.plot(Y_test.reshape(len(Y_test)))
plt.plot(predictions)
plt.show()

The new model still handles the one-step forecast, as seen below. Let's try the dynamic forecast now.

# build state
modelR.reset_states()
modelR.predict(X_train, batch_size=batch_size)
dynpredictions = list()
dyhat = X_test[0]
for i in range(len(X_test)):
    # make dynamic forecast: feed the previous prediction back in
    dyhat = dyhat.reshape(1, 1, 1)
    dyhat = modelR.predict(dyhat, batch_size=batch_size)[0,0]
    # store forecast
    dynpredictions.append(dyhat)
    expected = Y_test[i]
    print('Month=%d, Predicted Dynamically=%f, Expected=%f' % (i+1, dyhat, expected))

drmse = np.sqrt(mean_squared_error(Y_test.reshape(len(Y_test)), dynpredictions))
print('Test Dynamic RMSE: %.3f' % drmse)
# line plot of observed vs predicted
plt.plot(Y_test.reshape(len(Y_test)))
plt.plot(dynpredictions)
plt.show()

Now the dynamic forecast seems to be working too!
Role of delays in LSTM networks
45,664
Rare Events Logistic Regression
Don't do anything special. However, and this is crucial: choose a good quality measure. And that is not classification accuracy, sensitivity, specificity, or similar measures, such as ROC curves. These can be very misleading in the case of unbalanced data, "identifying" that simply labeling everything as the majority class is "optimal". Which it isn't. Oversampling the minority class or undersampling the majority class won't solve this problem, because it amounts to biasing your model and pretending that the population is different than it truly is. Neither will collecting more data solve your problem, since the relation between the majority and minority classes won't change. Instead, use probabilistic models rather than hard-thresholded 0-1 classification, and then use proper scoring rules. ("Proper" is really part of the term: there are proper and non-proper scoring rules. Classification accuracy is a non-proper scoring rule, and that is why it is not useful.) Frank Harrell, who knows what he is talking about, has written extensively on the topic: Classification vs. Prediction; Damage Caused by Classification Accuracy and Other Discontinuous Improper Accuracy Scoring Rules
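To make the contrast between accuracy and a proper scoring rule concrete, here is a small sketch (the data and both "models" are simulated for the illustration, not taken from any study): on imbalanced data, a degenerate model that always predicts the majority class looks excellent on accuracy, while a proper scoring rule such as the Brier score cleanly separates it from a well-calibrated model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10000
x = rng.normal(size=n)
# true probability of the rare event depends on x; overall prevalence is low
p_true = 1 / (1 + np.exp(-(x - 3)))
y = rng.binomial(1, p_true)

# model A: well-calibrated probabilities (here, the true ones, for illustration)
p_a = p_true
# model B: degenerate rule that always says "majority class"
p_b = np.zeros(n)

def accuracy(p, y, threshold=0.5):
    return np.mean((p >= threshold) == y)

def brier(p, y):
    # proper scoring rule: mean squared error of the predicted probabilities
    return np.mean((p - y) ** 2)

print('accuracy  A=%.4f  B=%.4f' % (accuracy(p_a, y), accuracy(p_b, y)))
print('Brier     A=%.4f  B=%.4f' % (brier(p_a, y), brier(p_b, y)))
```

The accuracies are nearly indistinguishable (both models almost always predict the majority class once thresholded), while the Brier score clearly rewards the calibrated probabilities.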
45,665
Rare Events Logistic Regression
What you are asking here is basically food for numerous posts, opinions, and a lot of research in the field of imbalanced classes. To answer your main question: there is no straight answer as to what a “rare” event is. I personally would say that your case is basically on the boundary. I will list some approaches here, but seriously, this is the tip of the iceberg; you will have to go back to Google and look for more info:

Do nothing. Sometimes it’s fine to just model the data as it is. Especially for an algorithm like logistic regression, which is not a classifier and predicts probabilities, these class imbalances will be taken into account. In the case you want to use the model as a classifier, you might consider to:

Undersample the majority class. Sample observations so that you make the two classes balanced.

Oversample the minority class. Similarly, sample some observations from the minority class, obviously with replacement.

That being said, I believe that these 3 techniques can get you started. Given that you want prediction accuracy, I would suggest looking into other techniques as well. If you really want to use logistic regression, then what I would do is:

Split the data into train and test sets.
Define a loss function.
Check which approach minimises the loss function on the test set.
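The two resampling approaches above can be sketched in a few lines (a minimal illustration in plain NumPy; the function names `undersample_majority` and `oversample_minority` are made up for this sketch, and in practice a dedicated library would be preferable):

```python
import numpy as np

rng = np.random.default_rng(42)

def undersample_majority(X, y):
    """Drop majority-class rows at random until both classes have equal counts."""
    idx0, idx1 = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    maj, mino = (idx0, idx1) if len(idx0) > len(idx1) else (idx1, idx0)
    keep = np.concatenate([mino, rng.choice(maj, size=len(mino), replace=False)])
    return X[keep], y[keep]

def oversample_minority(X, y):
    """Resample minority-class rows with replacement up to the majority count."""
    idx0, idx1 = np.flatnonzero(y == 0), np.flatnonzero(y == 1)
    maj, mino = (idx0, idx1) if len(idx0) > len(idx1) else (idx1, idx0)
    extra = rng.choice(mino, size=len(maj), replace=True)
    keep = np.concatenate([maj, extra])
    return X[keep], y[keep]

# toy imbalanced data: 95 zeros, 5 ones
X = np.arange(100).reshape(-1, 1)
y = np.r_[np.zeros(95, dtype=int), np.ones(5, dtype=int)]

Xu, yu = undersample_majority(X, y)
Xo, yo = oversample_minority(X, y)
print(len(yu), yu.mean())   # 10 rows, perfectly balanced
print(len(yo), yo.mean())   # 190 rows, perfectly balanced
```

Either way, remember the caveat from the other answer: resampling changes the apparent prevalence, so predicted probabilities from a model fit on resampled data no longer reflect the true population rates.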
45,666
Ordinal regression: logit, probit, complementary log-log or negative log-log?
There is no general guidance on this question, except that if you had to pick one model without knowing anything about the fit of any of the models, you might pick the logistic link (proportional odds ordinal logistic model) because its parameters are more interpretable. In my RMS course notes I have an in-depth case study in the chapter on ordinal models for continuous $Y$. You'll see some diagnostic plots for choosing the link function (the winner in the example was log-log, i.e., the discrete proportional hazards model). The approach I took there was to fit a tentative model (an ordinary linear model) just to get a linear predictor that could be stratified on (I used 6 quantile intervals because of the available sample size) with there being little outcome heterogeneity in each stratum. Then I computed the empirical CDF within each stratum and took various transformations including logit, log-log, probit. Only one (log-log) yielded curves that were parallel. Note that ordinal semiparametric models do not assume a shape for such curves; they only assume parallelism. When $Y$ is discrete there are other displays you can also make, as discussed in the chapter in RMS that precedes the continuous $Y$ chapter.
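The diagnostic just described can be sketched numerically (a rough illustration with simulated data, not the RMS case study; the simulation is constructed with Gumbel errors so that the log-log transform should linearize, and the "spread" statistic below is an ad hoc stand-in for visually judging parallelism):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated data: Gumbel errors, so P(Y <= c | x) = exp(-exp(-(c - x)))
# and the log-log transform -log(-log F) is linear in x (parallel curves).
n = 3000
x = rng.normal(size=n)
y = x - np.log(rng.exponential(size=n))

# 1. Tentative linear model, just to get a linear predictor to stratify on.
slope, intercept = np.polyfit(x, y, 1)
lp = intercept + slope * x

# 2. Stratify the linear predictor into quantile intervals.
n_strata = 6
edges = np.quantile(lp, np.linspace(0, 1, n_strata + 1))
stratum = np.clip(np.searchsorted(edges, lp, side='right') - 1, 0, n_strata - 1)

# 3. Empirical CDF of y at a few cut points within each stratum, passed
#    through each candidate transform; parallel curves suggest a good link.
cuts = np.quantile(y, [0.2, 0.4, 0.6, 0.8])
transforms = {
    'logit':   lambda F: np.log(F / (1 - F)),
    'probit':  lambda F: stats.norm.ppf(F),
    'log-log': lambda F: -np.log(-np.log(F)),
}
spreads = {}
for name, g in transforms.items():
    curves = np.array([[g(np.clip(np.mean(y[stratum == s] <= c), 1e-3, 1 - 1e-3))
                        for c in cuts] for s in range(n_strata)])
    # parallelism check: variation of the between-cut increments across strata
    spreads[name] = np.ptp(np.diff(curves, axis=1), axis=0).mean()
    print('%8s  spread of increments across strata: %.3f' % (name, spreads[name]))
```

In practice one would plot the transformed curves themselves rather than reduce them to a single number, but the idea is the same: only the correct link should make the stratum curves (approximately) parallel.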
45,667
Why does including an offset in ordinary regression change $R^2$?
Both are valid summaries of the models, but they should differ because the models involve different responses. The following analysis focuses on $R^2$, because those differ, too, and the adjusted $R^2$ is a simple function of $R^2$ (but a little more complicated to write). The first model is $$\mathbb{E}(Y \mid x) = \alpha_0 + \alpha_1 x + x\tag{1}$$ where $\alpha_0$ and $\alpha_1$ are parameters to be estimated. The last term $x$ is the "offset": this merely means that term is automatically included and its coefficient (namely, $1$) will not be varied. The second model is $$\mathbb{E}(Y-x\mid x) = \beta_0 + \beta_1 x\tag{2}$$ where $\beta_0$ and $\beta_1$ are parameters to be estimated. Linearity of expectation and the "taking out what is known" property of conditional expectations allow us to rewrite the left hand side as a difference $\mathbb{E}(Y\mid x) - x$, and algebra lets us add $x$ to both sides to produce $$\mathbb{E}(Y\mid x) = \beta_0 + \beta_1 x + x.$$ Thus the models are the same and are even parameterized identically, with $\alpha_i$ corresponding to $\beta_i$. As the output will attest, everything about their fits is the same: the coefficient estimates, their standard errors, the F statistic, and the p-values. However, the predictions differ: model $(1)$ predicts $$\mathbb{E}(Y\mid x)$$ while model $(2)$ predicts $$\mathbb{E}(Y-x\mid x).$$ Therefore, in computing $R^2$--the "amount of variance explained"--the "amount of variance" refers to different quantities: $\operatorname{Var}(Y)$ in the first case and $$\operatorname{Var}(Y-x) = \operatorname{Var}(Y) + \operatorname{Var}(x) - 2\operatorname{Cov}(Y,x)$$ in the second.
Moreover, the predictions of the two models differ, too: in the first model the predicted value of $\mathbb{E}(Y)$ for any $x$ is $$\hat y_1(x) = \hat\alpha_0 + (1 + \hat \alpha_1)x$$ (using, as is common, hats to designate estimated values of parameters) while in the second model the predicted value of $\mathbb{E}(Y-x)$ is $$\hat y_2(x) = \hat\beta_0 + \hat\beta_1 x = \hat\alpha_0 + \hat \alpha_1 x= \hat y_1(x) - x$$ (since the parameterizations correspond and the fits are the same). We can try to relate the two $R^2$. Let the data be $(x_1,y_1),\ldots, (x_n,y_n)$. For brevity, adopt vector notation $\mathbf{x} = (x_1,\ldots, x_n)$ and $\mathbf{y} = (y_1,\ldots, y_n)$. To distinguish variances and covariances of random variables in the models from properties of the data, for any $n$-vectors $\mathbf a$ and $\mathbf b$ write $$\bar{\mathbf{a}}= \frac{1}{n}\left(a_1 + \cdots + a_n\right)$$ and $$V(\mathbf{a}, \mathbf{b}) = \frac{1}{n-1}\left((a_1-\bar{\mathbf{a}})(b_1-\bar{\mathbf{b}}) + \cdots + (a_n-\bar{\mathbf{a}})(b_n-\bar{\mathbf{b}})\right).$$ Let $V(\mathbf{a}) = V(\mathbf{a},\mathbf{a})$ be a convenient shorthand. In model $(1)$, the coefficient of determination is $$R^2_1 = \frac{V(\hat{y}_1(\mathbf{x}))}{V(\mathbf{y})}$$ while in model $(2)$ it is $$\eqalign{R^2_2 &= \frac{V(\hat{y}_2(\mathbf{x}))}{V(\mathbf{y} - \mathbf{x})}\\ &=\frac{V(\hat{y}_1(\mathbf{x}) - \mathbf{x})}{V(\mathbf{y} - \mathbf{x})}\\ &=\frac{V(\hat{y}_1(\mathbf{x})) + V(\mathbf{x}) - 2V(\hat{y}_1(\mathbf{x}), \mathbf{x})}{V(\mathbf{y}) + V(\mathbf{x}) - 2V(\mathbf{y}, \mathbf{x})}. }$$ We can see $R^2_1$ lurking in the numerator in the form $V(\hat{y}_1(\mathbf{x})) = V(\mathbf{y}) R^2_1$, but no general simplification is evident. Indeed, we cannot even say in general which $R^2$ is greater than the other, even though the models give identical predictions of $\mathbb{E}(Y)$. These considerations suggest that $R^2$ might be overinterpreted in many situations. 
In particular, as a measure of "goodness of fit" it leaves much to be desired. Although it has its uses--it is a basic ingredient in many informative regression statistics--its meaning and interpretation might not be as straightforward as they would seem.
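A quick numerical check of the foregoing (a sketch in plain NumPy with simulated data; the offset model $(1)$ is fit here by regressing $y-x$ on $x$ and adding $x$ back, which is exactly what an offset does): the two fits share the same coefficient estimates, yet report different $R^2$ because variance is measured against different responses.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=1.0, size=n)   # simulated data

def ols(u, v):
    """Slope and intercept of the least-squares line v ~ u."""
    b1 = np.cov(u, v, ddof=1)[0, 1] / np.var(u, ddof=1)
    b0 = v.mean() - b1 * u.mean()
    return b0, b1

# Model (1): y ~ x with offset x  <=>  regress (y - x) on x, then add x back.
a0, a1 = ols(x, y - x)
yhat1 = a0 + (1 + a1) * x          # prediction of E(Y | x)
R2_1 = np.var(yhat1, ddof=1) / np.var(y, ddof=1)

# Model (2): (y - x) ~ x directly.
b0, b1 = ols(x, y - x)
yhat2 = b0 + b1 * x                # prediction of E(Y - x | x)
R2_2 = np.var(yhat2, ddof=1) / np.var(y - x, ddof=1)

print('coefficients: a=(%.4f, %.4f)  b=(%.4f, %.4f)' % (a0, a1, b0, b1))
print('R^2 model (1) = %.4f,  R^2 model (2) = %.4f' % (R2_1, R2_2))
```

The coefficient estimates coincide exactly, as the identical-parameterization argument predicts, while the two $R^2$ values differ substantially.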
45,668
Kalman filter vs Kalman Smoother for beta calculations
They are not really alternative approaches so much as solutions to different problems: one computes the sequence of filtering distributions $p(\beta_t|Y_{1:t})$, and the other the distributions based on all observations, $p(\beta_t|Y_{1:T})$, for $t =1,...,T$. The smoother doesn't "hide underlying dynamics" but rather adjusts its state estimate (with respect to the filter) to reflect the fact that new data has been observed; what "looked like" an increase in $\beta$ at time $t$ is now, on the basis of more accumulated evidence, believed to have been mostly observation noise and a much smaller move in $\beta$. Which algorithm you should use depends on what you need it for. If you are really looking at this data purely retrospectively, as you mention, then the smoother is what you want. If you want to build a trading algorithm based on $\beta_t$, then you obviously need to use the filtered estimate in your backtesting, because the smoothed one will not be available when you actually use the strategy (it depends on future data). EDIT to answer additional questions from a comment: Are you suggesting taking the output of the filter, and then forecasting the beta by fitting some model to it? You are already specifying a model for beta in your state space model (this is often a random walk but does not need to be), and you can forecast directly from this model. A two-step approximation where you first filter with simple linear dynamics and then forecast the filter output with a more complicated model (something that could not have been cast to linear state space form, say) could "work" in some cases, I guess, but it would be at the very least logically inconsistent. If you went that route, it would definitely need to be done on the filter output and tested for accuracy (the most meaningful way to do so being choosing the model which optimizes something tangible, like the P&L of a trading strategy based on your beta forecast, since you can't actually observe the "true" beta).
Both of your beta estimates are implicitly conditional on the dynamics you have assumed for beta. If the dynamics of the filtered/smoothed estimates don't "make sense" then I would perhaps reconsider the state dynamics in the model. Additionally, you have to take into account uncertainty: the fact that the smoother beta increases monotonically for a period does not mean that the "true" beta necessarily did. It is simply your best guess based on the data that you have, which does not actually include beta. You should plot confidence bands so you can see what the smoother is really "saying" about beta. If the data is very noisy and doesn't contain a lot of information about beta, it would not make any sense for the smoother estimate to be "bumpy": that information is just not there.
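The filter/smoother distinction can be made concrete with a scalar sketch (plain NumPy; the random-walk state model and all parameter values below are made up for illustration): beta follows a random walk, returns are $y_t = \beta_t m_t + \varepsilon_t$, the Kalman filter gives $p(\beta_t|Y_{1:t})$, and a Rauch-Tung-Striebel backward pass revises those estimates using all $T$ observations.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 300
q, r = 1e-4, 0.05**2                                  # state / observation noise variances (assumed)
m = rng.normal(0, 0.01, T)                            # market returns (regressor)
beta = 1.0 + np.cumsum(rng.normal(0, np.sqrt(q), T))  # true random-walk beta
y = beta * m + rng.normal(0, np.sqrt(r), T)           # asset returns

# Kalman filter: p(beta_t | y_{1:t})
bf = np.empty(T); Pf = np.empty(T)     # filtered mean / variance
bp = np.empty(T); Pp = np.empty(T)     # one-step predicted mean / variance
b, P = 1.0, 1.0                        # vague prior
for t in range(T):
    bp[t], Pp[t] = b, P + q                    # predict (random walk: mean unchanged)
    S = m[t]**2 * Pp[t] + r                    # innovation variance
    K = Pp[t] * m[t] / S                       # Kalman gain
    b = bp[t] + K * (y[t] - m[t] * bp[t])      # update
    P = (1 - K * m[t]) * Pp[t]
    bf[t], Pf[t] = b, P

# RTS smoother: p(beta_t | y_{1:T}), backward pass
bs = bf.copy(); Ps = Pf.copy()
for t in range(T - 2, -1, -1):
    J = Pf[t] / Pp[t + 1]                      # smoother gain
    bs[t] = bf[t] + J * (bs[t + 1] - bp[t + 1])
    Ps[t] = Pf[t] + J**2 * (Ps[t + 1] - Pp[t + 1])

rmse_f = float(np.sqrt(np.mean((bf - beta)**2)))
rmse_s = float(np.sqrt(np.mean((bs - beta)**2)))
print('filter   RMSE vs true beta: %.4f' % rmse_f)
print('smoother RMSE vs true beta: %.4f' % rmse_s)
```

Note that the smoother's posterior variances are never larger than the filter's, which is exactly the "it uses more data" point above; for a live trading signal only the filtered path would have been available in real time.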
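The filtering/smoothing distinction above can be sketched with a minimal scalar Kalman filter plus a Rauch-Tung-Striebel (RTS) backward pass for a random-walk beta. This is a hypothetical simulation, not the poster's actual model: the state noise q, observation noise r, and the data are all invented for illustration.

```python
import numpy as np

def kalman_filter_smoother(y, x, q=0.01, r=1.0, beta0=0.0, p0=1.0):
    """Filter and RTS-smooth a time-varying regression coefficient.

    State model (random walk):  beta_t = beta_{t-1} + w_t,  w_t ~ N(0, q)
    Observation model:          y_t    = x_t * beta_t + v_t, v_t ~ N(0, r)
    Returns filtered and smoothed means/variances of beta_t.
    """
    T = len(y)
    bf = np.empty(T); pf = np.empty(T)   # filtered mean/variance
    bp = np.empty(T); pp = np.empty(T)   # one-step-ahead predictions
    b, p = beta0, p0
    for t in range(T):
        # predict: random walk keeps the mean, inflates the variance
        b_pred, p_pred = b, p + q
        bp[t], pp[t] = b_pred, p_pred
        # update with observation y_t (scalar Kalman gain)
        k = p_pred * x[t] / (x[t] ** 2 * p_pred + r)
        b = b_pred + k * (y[t] - x[t] * b_pred)
        p = (1.0 - k * x[t]) * p_pred
        bf[t], pf[t] = b, p
    # RTS backward pass: revise each estimate using all later data
    bs = bf.copy(); ps = pf.copy()
    for t in range(T - 2, -1, -1):
        c = pf[t] / pp[t + 1]                 # smoother gain
        bs[t] = bf[t] + c * (bs[t + 1] - bp[t + 1])
        ps[t] = pf[t] + c ** 2 * (ps[t + 1] - pp[t + 1])
    return bf, pf, bs, ps

# Simulate: true beta follows a random walk, observed only through y = x*beta + noise.
rng = np.random.default_rng(0)
T = 200
x = rng.normal(size=T)
beta_true = np.cumsum(0.1 * rng.normal(size=T))
y = beta_true * x + rng.normal(size=T)
bf, pf, bs, ps = kalman_filter_smoother(y, x, q=0.01, r=1.0)
```

Plotting bs against bf on simulated data shows the typical picture described above: the smoothed path is less bumpy, its posterior variance ps is never larger than the filtered pf, and the two coincide at the final time step (where the filter has already seen all the data).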
45,669
Zero correlation between $x$ and $y = f(x)$?
There are three mutually exclusive possibilities for $f$ (apart from the trivial one where the domain of $f$ has just one element). To be fully general and avoid trivial complications, let's not worry about correlation, but focus on covariance instead: when covariance is zero, correlation is either zero or undefined. (Correlation becomes undefined when the variance of either marginal distribution is zero.) $f$ is not injective. Suppose there exist $x_1 \lt x_2$ for which $f(x_1)=f(x_2)$. Putting probabilities of $1/2$ on each of $x_1$ and $x_2$ gives zero covariance (and undefined correlation). $f$ is injective but not monotonic. Otherwise suppose there exist $x_1\lt x_2 \lt x_3$ for which the $y_i=f(x_i)$ are not in order. Since covariance does not change with translations, make the calculations easier by shifting the $x$ and $y$ coordinates so that $x_2=y_2=0$. The assumption amounts to $x_1\lt 0,$ $x_3\gt 0$, and $y_1$ and $y_3$ have the same sign. Define $C(p)$ to be the covariance achieved by putting probabilities of $p$ on $x_1$, $1/2-p$ on $x_3$, and $1/2$ on $0$. $C$ clearly is a quadratic function of $p$ and therefore is continuous. Compute $$C(0)=\frac{1}{4}x_3y_3,\ C(1/2)=\frac{1}{4}x_1y_1.$$ The assumptions imply $C(0)$ and $C(1/2)$ have opposite signs. The Intermediate Value Theorem implies there is some $p\in (0,1/2)$ for which $C(p)=0$: use this value of $p$ to achieve zero covariance. The correlation will be defined and zero. $f$ is monotonic. Otherwise $f$ is strictly monotonic (increasing or decreasing). From the characterization of covariance as an expected signed area, it is obvious that all covariances must be strictly positive or strictly negative when the probability is not an atom. The correlation will be defined.
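The injective-but-not-monotonic case can be checked numerically. A hypothetical example: $f(-1)=1$, $f(0)=0$, $f(1)=2$ is injective but not monotonic, and the $y_i$ are "not in order", so the Intermediate Value Theorem argument above guarantees a root $p\in(0,1/2)$ of $C(p)$, which bisection finds:

```python
# Non-monotonic injective f on {-1, 0, 1}: f(-1)=1, f(0)=0, f(1)=2.
xs = [-1.0, 0.0, 1.0]
ys = [1.0, 0.0, 2.0]

def cov(p):
    """Covariance under probabilities (p, 1/2, 1/2 - p) on x = -1, 0, 1."""
    w = [p, 0.5, 0.5 - p]
    ex = sum(wi * xi for wi, xi in zip(w, xs))
    ey = sum(wi * yi for wi, yi in zip(w, ys))
    exy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    return exy - ex * ey

# C(0) = 1/2 > 0 and C(1/2) = -1/4 < 0 have opposite signs,
# so bisect for the root in (0, 1/2).
lo, hi = 0.0, 0.5
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if cov(lo) * cov(mid) <= 0:
        hi = mid
    else:
        lo = mid
p_star = 0.5 * (lo + hi)
print(p_star)   # about 0.3904, the root of 4p^2 + p - 1 = 0 in (0, 1/2)
```

At p_star the covariance vanishes even though $y$ is a deterministic, injective function of $x$.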
45,670
Zero correlation between $x$ and $y = f(x)$?
The answer is no if $f$ is affine-linear and non-constant. This holds by the linearity and translation invariance of the covariance: $\operatorname{Cov}(X, aX+b) = a\operatorname{Var}(X)$, which is nonzero whenever $a \neq 0$ and $X$ is nondegenerate.
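A quick numerical check of the identity Cov(X, aX + b) = a·Var(X), on simulated data with arbitrary slope and intercept:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
a, b = 2.5, -7.0
y = a * x + b                      # affine-linear f

# Translation by b drops out of the covariance; a factors out.
cov_xy = np.cov(x, y)[0, 1]
print(cov_xy, a * np.var(x, ddof=1))   # the two numbers agree
```

Since the sample covariance obeys the same identity exactly, the agreement here is to floating-point precision, not just approximately.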
45,671
Interpreting interactions in a linear model vs quadratic model
My favorite way to understand an interaction between two continuous predictors (e.g. heat and year) is to plot it with the full range of one predictor on the x-axis and a few different lines representing potentially interesting values of the other predictor --- I usually pick three, representing low, medium, and high levels of the predictor, but you can play around with it depending on your data.

For your specific case, here is some example code. First, since you didn't provide example data, I'll generate some that (very roughly) matches the parameter estimates you provide:

set.seed(24601) # to get exactly the same results as mine, use this set.seed()

# generate data
year <- 1996:2006
heat <- rnorm(n=20, mean=50, sd=15)
location <- letters[1:10]

library(tidyverse)
my_data <- base::expand.grid(year=year, heat=heat, location=location) %>%
  mutate(yield = -75411.13 + 38.89*year - 82.56*heat + 0.04*year*heat + rnorm(nrow(.), sd = 50))

Basically, I just used the fixed effect estimates from your model and added a little normal noise. For your own purposes, of course, you will use your real data and real fitted models.

Model 1 (interaction, linear effects only)

# run model
library(lme4)
mdl <- lmer(yield ~ year * heat + (1|location), data = my_data)

From your question, it seems like you're interested primarily in the overall fixed effects (not the estimates for individual locations, etc.), so I'll focus on that. If you want to get into visualizing the results for individual locations, I recommend this awesome blog post: https://tjmahr.github.io/plotting-partial-pooling-in-mixed-effects-models/

Here are the fixed effects as I estimated them (they're a bit different from your estimates, so your plot will look a bit different, too, but I think it's close enough to be useful):

> (fe <- summary(mdl)$coefficients[1:4,1])
  (Intercept)          year          heat     year:heat
-74219.711477     38.293178   -117.339456      0.057416

Now let's plot it to take a look at that interaction. I'll choose to put heat on the x-axis and have three representative lines for low, medium, and high values of year, but you could switch and do it the other way if you prefer. I'll use those values (the full range of heat and the three example values of year) to generate predicted yield scores for each combination based on the fixed effects estimates from the model.

# select values for plotting
# full range of heat, and selected low, med and high values for year
plot_df <- expand.grid(heat=heat, year=c(min(year), mean(year), max(year))) %>%
  mutate(yield = fe[1] + fe[2]*year + fe[3]*heat + fe[4]*year*heat)

Now I'll plot those predicted yield values as lines.

plot_interaction <- ggplot(plot_df, aes(y=yield, x = heat, color = as.factor(year))) +
  geom_line(size=2) +
  labs(color = "year") +
  # tweak plot appearance
  scale_color_brewer(palette = "Dark2") +
  theme(legend.position = "top") +
  theme_classic()
plot_interaction

So what do we see here? First of all, the interaction is very small relative to the individual effects of year and heat --- the lines look almost parallel. But you know from the parameter estimates that it is significantly different from zero, so it's there even if it's small. heat has a negative effect when year = 0 (ideally, you will have centered both predictors before estimating the model, so this would be "at the mean of year"), and the positive interaction term indicates that as year increases, the effect of heat gets less negative, so it weakens. What you should look for in the plot is the slope being a bit shallower for higher years than for lower years.

Model 2 (linear and quadratic effects, with interactions)

The second model seems much more complicated, but it's almost as easy to plot! We'll do the same procedure as before: first get the fixed effects from the model, then generate some plotting data with the full range of heat, selected values of year, and predicted values for yield based on the fixed effect estimates.

First, since I don't have your real data, I'm re-generating another dataset that will (roughly) match the parameter estimates of the second model, so we can get a more or less accurate plot.

my_data <- base::expand.grid(year=year, heat=heat, location=location) %>%
  mutate(yield = -97816.61 + 50.07*year + 20499.87*heat - 632.2*heat*heat - 10.22*year*heat + .31*year*heat*heat + rnorm(nrow(.), sd = 50))

Run the second model:

mdl <- lmer(yield ~ year*(heat + I(heat^2)) + (1|location), data = my_data)

Get the fixed effects:

fe <- summary(mdl)$coefficients[1:6,1]
> round(fe, 3)
   (Intercept)           year           heat      I(heat^2)      year:heat year:I(heat^2)
    -95838.400         49.083      20410.924       -631.207        -10.176          0.310

Close enough. :) Now generate the plotting data.

# full range of heat, and selected low, med and high values for year
plot_df <- expand.grid(heat=heat, year=c(min(year), mean(year), max(year))) %>%
  mutate(yield = fe[1] + fe[2]*year + fe[3]*heat + fe[4]*heat*heat + fe[5]*year*heat + fe[6]*year*heat*heat)

We can use exactly the same plotting code as before, just feed it the updated dataframe:

plot_interaction %+% plot_df

Now we can see that the general effect of heat on yield is negative, with an accelerating effect, so that there's a steeper negative slope at higher levels of heat. Moreover, we can see that the effect of heat depends on year (i.e. the interaction), such that heat drops off more sharply in earlier years than in later years.

You should be able to generate plots for your own data using this code. I always find having a visual extremely helpful when interpreting interactions, especially in models with more than a few coefficients (such as your model 2). (Note that if the ranges for heat and year in your data are pretty different from what I came up with generating these data, then your plots might look quite different even though the parameter estimates are similar.)

A final note on plotting and interpreting interactions

My guess from your parameter estimates is that you didn't center year and heat before estimating the model, which is why I didn't either. But you probably should. Not only does it ease some model estimation issues like multicollinearity, it makes interpreting your interactions easier since 0 values are more meaningful. For example, the parameter estimates you're getting for heat are the predicted change in yield for each additional unit of heat when year = 0. Unless you've centered year (or you're studying heat and yields in antiquity), you're probably talking about values far outside of the range of your data. Your estimate for the effect of heat will be much more meaningful if you center first.
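To see concretely why centering changes the meaning of the lower-order coefficients, here is a small simulated sketch using plain OLS in Python (numpy only, not the lme4 mixed model; all coefficients and data are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)
year = rng.uniform(1996, 2006, 500)
heat = rng.normal(50, 15, 500)
# True model written in centered form: effect of heat at an average year is -2.
yield_ = (100 + 3 * (year - 2001) - 2 * (heat - 50)
          + 0.05 * (year - 2001) * (heat - 50) + rng.normal(0, 1, 500))

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Uncentered: the 'heat' coefficient is the effect of heat when year = 0 (!).
X_raw = np.column_stack([np.ones_like(year), year, heat, year * heat])
b_raw = ols(X_raw, yield_)

# Centered: the 'heat' coefficient is the effect of heat at the mean year.
yc, hc = year - year.mean(), heat - heat.mean()
X_c = np.column_stack([np.ones_like(yc), yc, hc, yc * hc])
b_c = ols(X_c, yield_)

print(b_raw[2])   # huge and uninterpretable: effect of heat at year 0
print(b_c[2])     # close to -2: effect of heat at an average year
print(b_c[3])     # the interaction coefficient is unchanged by centering
```

The highest-order interaction term is the same in both parameterizations; only the interpretation (and magnitude) of the main effects changes, which is exactly the point of the centering advice above.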
45,672
Interpreting interactions in a linear model vs quadratic model
Taking the derivative with respect to heat, we get the marginal effect: $\frac{\partial yield}{\partial heat}=2500-2*632*heat-10*year+2*0.31*year*heat$ We can rearrange the part of interest of the marginal effect as: $year*(-10+0.62*heat)$ This means that the greater the heat, the more slowly its positive effect declines with each year. So suppose we have heat of 10. Then each subsequent year the effect declines by $-10+0.62*10\approx-4$ compared to the previous year. However, suppose that $heat>\frac{10}{0.62}$, for example $heat=100$. Then each year the heat effect grows by an extra $-10+0.62*100\approx52$. Also, heat itself may change year by year, so its effect will vary. Whether we expect a positive or negative time trend depends on the value of heat.
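The arithmetic in this answer can be verified with a few lines. The coefficients are copied from the marginal-effect formula above and should be treated as rounded, illustrative values:

```python
# Coefficients from the answer's marginal-effect formula:
# d(yield)/d(heat) = b_h + 2*b_h2*heat + b_yh*year + 2*b_yh2*year*heat
b_h, b_h2, b_yh, b_yh2 = 2500.0, -632.0, -10.0, 0.31

def me_heat(heat, year):
    """Marginal effect of heat on yield at given (heat, year)."""
    return b_h + 2 * b_h2 * heat + b_yh * year + 2 * b_yh2 * year * heat

def yearly_change(heat):
    """How the marginal effect of heat changes from one year to the next."""
    return me_heat(heat, year=1) - me_heat(heat, year=0)   # = -10 + 0.62*heat

print(yearly_change(10))    # about -3.8: the heat effect declines each year
print(yearly_change(100))   # about 52: above heat = 10/0.62, it grows each year
```

The sign flip at heat = 10/0.62 ≈ 16 is exactly the threshold the answer points out.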
45,673
Deep Learning: Wild differences after model is retrained on the same data, what to do?
This is normal and exactly what you should expect. Often, deep models are exquisitely sensitive to the initial parameters. It would appear that they are exquisitely sensitive even to the first step taken. This is because the loss function is non-convex and optimization procedures have difficulty finding any kind of global minimum--the existence and locatability-by-an-optimization-algorithm of which would make your models the same every time. This is therefore not necessarily an indication of uncertainty in your model. What to do? (1) You might as well try 10 different random initializations, or train it 10 different times with the same initialization, and take the one that gives you the best results. (Generally when DL is applied, it seems that practitioners measure performance on massive held-out sets--CV isn't even feasible.) (2) Read Chapter 8 of Deep Learning by Goodfellow et al. before returning to the answers to this post. It is very interesting and might give insight into what you're seeing; a revealing diagram there illustrates the point. Devising an optimization procedure that will arrive at the same solution in high dimensions regardless of where it begins is an area of active research, compounded by the difficulties just described; besides having to navigate a nonconvex topology, things like the number of training iterations may now dictate where you land depending on where you start. Even if you weren't initializing differently each time, you could still find very different models. You may not even be ending up at different local minima; in high dimensions, you could land at a saddle point, a plateau, etc. Sometimes a modeler stops training because the training error plateaus, but the norm of the gradient continues to increase, suggesting that training hasn't even reached a critical point! Ultimately, training deep models can take months for these reasons and appears to be part art. If you're looking for a model that trains the same way every time, consider one of the myriad models with convex losses (e.g., SVM, logistic regression). Perhaps you don't have enough data to train a very deep model... perhaps with enough data the training would be more stable... but I don't think this is necessarily the case.
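The sensitivity to initialization can be demonstrated on a toy nonconvex loss. This is a deliberately simple 1-D stand-in for a deep net's loss surface, with the loss function and settings invented for illustration:

```python
def loss(w):
    """A simple nonconvex loss with two local minima of different depths."""
    return (w ** 2 - 1) ** 2 + 0.3 * w

def grad(w):
    return 4 * w * (w ** 2 - 1) + 0.3

def gradient_descent(w0, lr=0.01, steps=2000):
    """Plain gradient descent from initial point w0."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)
    return w

# Two initializations, two different "final models":
w_a = gradient_descent(-1.5)   # converges to the negative local minimum
w_b = gradient_descent(+1.5)   # converges to the positive local minimum
print(w_a, w_b)                # one near -1, the other near +1
print(loss(w_a), loss(w_b))    # the two solutions have different losses, too
```

Both runs fully converge (gradient near zero), yet they end up at different solutions with different loss values, purely because of where they started. A deep network's loss surface has the same pathology in millions of dimensions.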
45,674
Deep Learning: Wild differences after model is retrained on the same data, what to do?
Assuming that your model is trained well and not over- or underfit, this can usually be taken as a measure of uncertainty: for some reason, different models trained on the same data give different results. This could be because you simply cannot predict this temperature well from the data, e.g. there is no clear correlation available. Another reason could be train/test data differences. It is a common problem that the training data differ from the test data (e.g. when training on simulation but predicting real events). Highly varying predictions can indicate that (at least the varying feature) is badly represented in the training sample. What I would propose you do: Split your training sample, say 80'000 to 20'000, then train your ten models on the former sample and test them on the latter sample. If the variation shrinks, it is because of the second reason stated above. Go over your data from an expert's point of view: are there variables that are probably more/less correlated? Don't forget about the order of magnitude! If your left temperature has a std of 1 and the one on the right of 10, this could also be because the temperatures in the data on the left vary over a much narrower range. As an example, compare the prediction of the earth's surface temperature vs the temperature at the surface of the sun. Of course the prediction about the sun is, in absolute terms, much further off than your prediction about the earth's temperature. If this could be a problem, you may scale the errors by dividing them by the mean (or similar).
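The last point, scaling errors by the mean, can be sketched like this. The prediction numbers below are invented purely to illustrate the comparison:

```python
import numpy as np

# Hypothetical predictions of the same target from ten retrained models,
# for two quantities that live on very different scales:
preds_left  = np.array([20.1, 21.0, 19.5, 20.7, 20.2, 19.9, 20.4, 20.8, 19.7, 20.3])
preds_right = np.array([510., 498., 523., 505., 490., 515., 508., 502., 497., 512.])

# Raw spread is not comparable across very different magnitudes;
# dividing by the mean gives a relative, scale-free measure of variation.
for name, p in [("left", preds_left), ("right", preds_right)]:
    cv = p.std(ddof=1) / p.mean()
    print(name, round(cv, 4))
```

With these made-up numbers the right-hand predictions have the larger raw standard deviation, but the smaller relative variation, so the apparent instability is mostly an artifact of scale.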
45,675
What exactly is overfitting?
You can't determine which curve is better by staring at them. By "staring" I mean analyzing them based on purely statistical features of this particular sample. For instance, the black curve is better than the green one if the blue dots that stick out of the blue area into the red do so by pure chance, i.e. randomly. If you obtained another sample and the blue dots in the red area disappeared while other blue dots showed up, this would mean that the black curve is truly capturing the separation, and the deviations are random. BUT how would you know this by looking at this ONE sample?! You can't. Therefore, lacking context, it is impossible to say which curve is better by just staring at this sample and the curves on it. You need exogenous information, which could be other samples or your knowledge of the domain.

Overfitting is a concept, and there's no one right way of identifying the issue that works for every domain and every sample. It's case by case. As you wrote, the dynamics of error reduction in training and testing samples is one way. It goes back to the same idea I wrote above: detecting whether the deviations from the model are random. For instance, if you obtained another sample and it rendered different blue points in the red area, but these new points were very close to the old ones, this would mean that the deviations from the black line are systematic. In this case you would naturally gravitate towards the blue line.

So, overfitting in my world is treating random deviations as systematic. An overfitting model is worse than a non-overfitting model, ceteris paribus. However, you can certainly construct an example where the overfitting model has some other feature that the non-overfitting model doesn't have, and argue that this makes the former better than the latter. The main issue with overfitting (treating random as systematic) is that it messes up the model's forecasts. It does so mathematically because the model becomes very sensitive to inputs that are not important: it converts noise in the inputs into a false signal in the response, while the non-overfitting model ignores the noise and produces a smoother response, hence a higher signal-to-noise ratio in the output.
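The noise-into-false-signal point can be seen in a small simulation (a generic sketch, not part of the original answer; the seed and noise level are my own choices): a flexible polynomial fits the training sample more closely than a simple one, yet its error on a fresh sample from the same process exceeds its training error.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 30)
y = 2 * x + rng.normal(0, 0.3, 30)          # linear signal plus random noise
x_te = rng.uniform(0, 1, 200)               # a fresh sample from the same process
y_te = 2 * x_te + rng.normal(0, 0.3, 200)

train_mse, test_mse = {}, {}
for deg in (1, 9):
    c = np.polyfit(x, y, deg)               # least-squares polynomial fit
    train_mse[deg] = np.mean((np.polyval(c, x) - y) ** 2)
    test_mse[deg] = np.mean((np.polyval(c, x_te) - y_te) ** 2)
```

The degree-9 fit treats random deviations as systematic: its training error is lower than the linear fit's, but its test error is higher than its own training error.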
45,676
What exactly is overfitting?
From Wikipedia: In overfitting, a statistical model describes random error or noise instead of the underlying relationship. Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfit has poor predictive performance, as it overreacts to minor fluctuations in the training data.

So basically, when training a model on data, you are fitting both noise and structure. The noise comes from sampling error, and as a machine learning designer your job is to design the algorithm such that it fits as much of the structure as possible without picking up so much noise that performance degrades. Looking at it from a marginal perspective, say you add one unit of complexity to your model. The marginal performance change is now composed of a bias-reduction term from the additional structure you are fitting and a variance term from the noise you are fitting. When the marginal variance effect is larger than the marginal bias effect, you are overfitting. Standard illustration of bias and variance below.

By the way, the asymptotic training error of random forest classification is 0 (at least if there are no identical observations with different classes). This is true since each observation is present in, on average, about 63 % of the estimators (that is, $1 - 1/e \approx 0.632$), and each of those estimators predicts it correctly. So, given enough trees, the law of large numbers will ensure that the correct class gets a score of at least about 0.63, regardless of the predictions of the trees that did not use the observation in fitting.
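The in-bag percentage quoted here is the bootstrap inclusion probability: the chance that a given observation appears at least once in a bootstrap sample of size $n$ drawn with replacement, which tends to $1 - 1/e \approx 0.632$ as $n$ grows. A quick check (a generic sketch):

```python
import math

n = 1000
# P(a given observation is drawn at least once in a bootstrap
# sample of size n, sampling with replacement)
p_in_bag = 1 - (1 - 1 / n) ** n   # close to 1 - 1/e ≈ 0.632
```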
45,677
What exactly is overfitting?
Overfitting is when you end up modeling the noise in the data, which lowers the classification error on the training data but reduces accuracy on unseen (validation) data. Say you have 10 pairs $(x, 2x+e)$, with $e$ a small random error. You can certainly model these perfectly with a 9-degree polynomial. But that would be overfitting, since this is almost surely not the correct model; the correct model is $y = 2x$. In this light you can even see what the two most popular solutions to this problem are: 1. Get more data. 2. Keep the model simpler (fewer parameters).
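This toy setup is easy to reproduce (a sketch; the seed, noise level, and $x$ grid are my own choices): with 10 points, a degree-9 polynomial has 10 coefficients and fits the sample exactly, noise included, while the (correct) linear model leaves visible residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)                    # 10 pairs
y = 2 * x + rng.normal(0, 0.5, 10)           # y = 2x plus a small random error e

c9 = np.polyfit(x, y, 9)                     # 10 coefficients: interpolates the sample
c1 = np.polyfit(x, y, 1)                     # the simpler (and correct) model class

resid9 = np.max(np.abs(np.polyval(c9, x) - y))   # essentially zero
resid1 = np.max(np.abs(np.polyval(c1, x) - y))   # on the order of the noise
```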
45,678
What exactly is overfitting?
ELI5 version: Consider a neural network with 1M parameters for classifying cat vs. dog. We can think of each of the 1M parameters as fitting a yes/no question about features such as whether the image has a tail, claws, etc. What happens here is that we have more than enough questions/parameters to infer whether a given image is a cat or a dog, so we also learn extra information (including the background, etc.) that may be present in the training distribution but not in the validation distribution.
45,679
What is the expectation of the absolute value of the Skellam distribution?
It's possible to write the expectation in terms of easy-to-compute special functions. Let $z$ follow a Skellam distribution with rates $\lambda_1$ and $\lambda_2$, and $k = |z|$. The pmf for $k$ is: $$p(k; \lambda_1, \lambda_2) = \begin{cases} e^{-\lambda_1 - \lambda_2} \left( \left(\frac{\lambda_1}{\lambda_2}\right)^{\frac{k}{2}} I_k(2\sqrt{\lambda_1 \lambda_2}) + \left(\frac{\lambda_2}{\lambda_1}\right)^{\frac{k}{2}} I_{-k}(2\sqrt{\lambda_1 \lambda_2}) \right) &\text{if } k > 0\\ e^{-\lambda_1 - \lambda_2}I_0 (2\sqrt{\lambda_1 \lambda_2})& \text{if } k = 0\end{cases}$$ Here $I_k(a)$ is a modified Bessel function of the first kind and has the symmetry property $I_{k}(a) = I_{-k}(a)$, so the moment generating function of $k$ is $$\begin{aligned} \mathcal{M}(t; \lambda_1, \lambda_2) = e^{-\lambda_1 - \lambda_2} \left(\sum_{k=0}^{\infty} e^{tk} I_k(2\sqrt{\lambda_1 \lambda_2}) \big[\big(\frac{\lambda_1}{\lambda_2}\big)^{\frac{k}{2}} + \big(\frac{\lambda_2}{\lambda_1}\big)^{\frac{k}{2}} \big] - I_0 (2\sqrt{\lambda_1 \lambda_2}) \right) \end{aligned} $$ Written in this form, recognize that the sum can be written in terms of a special function known as Marcum's $Q$ (used, for example, in the cdf of the noncentral $\chi^2$ distribution). A definition of $Q$ is: $$ Q(\sqrt{2b},\sqrt{2a}) = e^{-a - b} \sum_{k=0}^\infty \left(\frac{a}{b}\right)^{\frac{k}{2}} I_k(2\sqrt{a b}) $$ So that the moment-generating function becomes: $$\begin{aligned} \mathcal{M}(t;\lambda_1, \lambda_2) = e^{-\lambda_1 - \lambda_2} \big(&Q(\sqrt{2\lambda_2e^{-t}},\sqrt{2\lambda_1e^t}) e^{\lambda_1e^t + \lambda_2e^{-t}} + \\ &Q(\sqrt{2\lambda_1e^{-t}},\sqrt{2\lambda_2e^t}) e^{\lambda_2e^t + \lambda_1e^{-t}} - \\ &I_0 (2\sqrt{\lambda_1 \lambda_2})\big) \end{aligned}$$ The derivative of $Q(\sqrt{2\lambda_1e^{-t}}, \sqrt{2\lambda_2e^t})$ w.r.t. 
$t$ is: $$Q'(\sqrt{2\lambda_1e^{-t}}, \sqrt{2\lambda_2e^t}) = e^{ -\lambda_1 e^t - \lambda_2 e^{-t}} (\lambda_2e^{-t} I_0(2\sqrt{\lambda_1 \lambda_2 }) + \sqrt{\lambda_2 \lambda_1} I_1(2\sqrt{\lambda_1 \lambda_2 }) )$$ Differentiating the mgf around $t=0$ and simplifying gives the expectation of $k$: $$ \begin{aligned} \mathbb{E}[k; \lambda_1, \lambda_2] = 2 &e^{-\lambda_1 - \lambda_2} \big( \lambda_2 I_0(2\sqrt{\lambda_1 \lambda_2 }) + \sqrt{\lambda_1 \lambda_2} I_1(2\sqrt{\lambda_1 \lambda_2 }) \big) + \\ &(\lambda_2 - \lambda_1)\left(1 - 2 Q(\sqrt{2\lambda_1}, \sqrt{2\lambda_2}) \right) \end{aligned} $$ The $Q$ function can be calculated using any statistical package that implements the noncentral $\chi^2$ distribution function (see below for an R example). In the case where $\lambda_1 = \lambda_2 = \lambda$, the expectation reduces to: $$ \mathbb{E}[k; \lambda] = 2\lambda e^{-2\lambda} \left( I_0(2\lambda) + I_1(2\lambda) \right) $$ A numerical example in R:

set.seed(4)
MarcumQ <- function(a, b) 1 - pchisq(b^2, 2, a^2)

# case where l1 != l2
exp_k <- function(l1, l2) {
  a <- 2 * sqrt(l1 * l2)
  2 * exp(-l1 - l2) * (l2 * besselI(a, 0) + a/2 * besselI(a, 1)) +
    (l2 - l1) * (1 - 2 * MarcumQ(sqrt(2*l1), sqrt(2*l2)))
}

exp_k(5, 20)                                      # analytical
# [1] 15.00187
mean(abs(rpois(100000, 5) - rpois(100000, 20)))   # simulated
# [1] 15.0018

# case where l1 = l2
exp_k2 <- function(l) exp(-2*l) * 2*l * (besselI(2*l, 0) + besselI(2*l, 1))

exp_k2(20)                                        # analytical
# [1] 5.03042
mean(abs(rpois(100000, 20) - rpois(100000, 20)))  # simulated
# [1] 5.03498
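For readers without R, here is a sketch of the same check ported to Python, assuming SciPy is available: `ncx2.sf` plays the role of `1 - pchisq(·, 2, ncp)` for Marcum's $Q$, and `scipy.special.iv` is `besselI`.

```python
import numpy as np
from scipy.special import iv
from scipy.stats import ncx2

def marcum_q(a, b):
    # Marcum Q_1(a, b) via the noncentral chi-square survival function
    return ncx2.sf(b**2, df=2, nc=a**2)

def exp_abs_skellam(l1, l2):
    # E|X - Y| for X ~ Poisson(l1), Y ~ Poisson(l2), per the formula above
    a = 2 * np.sqrt(l1 * l2)
    return (2 * np.exp(-l1 - l2) * (l2 * iv(0, a) + (a / 2) * iv(1, a))
            + (l2 - l1) * (1 - 2 * marcum_q(np.sqrt(2 * l1), np.sqrt(2 * l2))))

rng = np.random.default_rng(4)
analytical = exp_abs_skellam(5, 20)
simulated = np.mean(np.abs(rng.poisson(5, 100_000) - rng.poisson(20, 100_000)))
```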
45,680
Is "Shannon entropy" used incorrectly in machine learning related literature?
It's not a problem. In fact Shannon himself suggested that other units could be used, see in his paper "A Mathematical Theory of Communication" the very first equation (bottom of page 1). Here's a quote from the paper: In analytical work where integration and differentiation are involved the base e is sometimes useful. The resulting units of information will be called natural units. Change from the base a to base b merely requires multiplication by $\log_b a$.
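A quick illustration of the base change (a generic sketch, not from the paper): entropy in natural units equals entropy in bits multiplied by $\ln 2$.

```python
import math

p = [0.5, 0.25, 0.25]
h_bits = -sum(pi * math.log2(pi) for pi in p)   # base 2: 1.5 bits
h_nats = -sum(pi * math.log(pi) for pi in p)    # base e: natural units

# Changing from base a to base b multiplies the entropy by log_b(a);
# here H_nats = H_bits * ln 2.
```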
45,681
Intuition for Rayleigh PDF
Here's some intuition: The bivariate distribution of the errors has its maximum at 0. However, the distribution of the distance from the center does not, since the only point contributing density there is the one at the center. As you move out a little, the bivariate density has decreased only a little, but you now get a little "circle" of contributions to the density of the distance from the center, making a small distance relatively more probable than a distance of 0. Then as you keep increasing the radius, the larger circumference "picks up" more density while the joint density of the error at that distance decreases more rapidly. Eventually you reach a peak, where the rate of gain from the growing circumference and the rate of loss from the falling density cancel, and after that the increasing radius loses out and the density of the distance from the center of the error distribution starts to fall.
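This intuition is easy to see in a simulation (a sketch with my own choice of sample size and binning): histogram the distance from the center for standard bivariate normal errors. The bin nearest 0 is almost empty, and the mode sits near $\sigma = 1$, not at 0.

```python
import numpy as np

rng = np.random.default_rng(0)
# iid N(0, 1) errors in x and y; r is the distance from the center
r = np.hypot(rng.normal(size=100_000), rng.normal(size=100_000))

counts, edges = np.histogram(r, bins=50, range=(0, 4))
mode = edges[counts.argmax()]   # left edge of the most populated bin
```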
45,682
Intuition for Rayleigh PDF
No, the OP's claim that "the two components of $\mathbf{e}_i = (e_{x,i}, e_{y,i})^T$ are independent of each other and normally distributed with equal mean $\mu$ and variance $\sigma^2$" is incorrect: the means are $0$ because the OP subtracted off $\mu$ when defining $\mathbf{e}_i$ as $\mathbf{x}_i - \mathbf{x}^*$. Edit: After I wrote the above, the OP corrected the statement in the question.

As to the question of why the pdf of the Rayleigh random variable is not maximized at $0$ even though the errors are more likely to be close to $0$: it is true that the joint pdf of $e_{x,i}$ and $e_{y,i}$ has a peak at $(0,0)$, but the probability that $e_i = \sqrt{e_{x,i}^2 + e_{y,i}^2}$ is small, say $\leq \epsilon$, is the volume under the joint pdf in a very slim (diameter $2\epsilon$) cylinder, and this converges to $0$ as $\epsilon \to 0$. More generally, for $r \in [0,\infty)$, the event $\{r \leq e_i \leq r+\Delta r\}$ occurs whenever the point $(e_{x,i},e_{y,i})$ is in the annular region that lies between the circles of radius $r$ and $r+\Delta r$ centered at the origin. In this region, the pdf has value $\approx \frac{1}{2\pi \sigma^2}\exp(-r^2/2\sigma^2)$ while the area of the region is $\pi (r+\Delta r)^2 - \pi r^2 \approx 2\pi r\Delta r$, giving $$P\{r \leq e_i \leq r+\Delta r\} \approx f_{e_i}(r)\Delta r \approx \frac{1}{2\pi \sigma^2}\exp(-r^2/2\sigma^2)\cdot 2\pi r\Delta r,$$ that is, $$f_{e_i}(r) = \frac{r}{\sigma^2}\exp(-r^2/2\sigma^2), \quad r \geq 0,$$ which is the Rayleigh pdf. Since $r$ increases steadily while $\exp(-r^2/2\sigma^2)$ decreases (slowly at first, but then very rapidly) as $r$ increases from $0$ to $\infty$, the Rayleigh pdf increases from $0$ at first, but soon reaches a peak and then declines rapidly towards $0$.
The location of the peak is $\sigma$ as we can determine via the standard calculus methods (or looking up the answer on Wikipedia), but notice that the peak of the Rayleigh pdf is at the point where its CDF $1-\exp(-r^2/2\sigma^2)\mathbf 1_{\{r\colon r \geq 0\}}$ has maximum derivative. Recalling that the normal density function $\frac{1}{\sigma\sqrt{2\pi}}\exp(-r^2/2\sigma^2)$ has inflection points at $r = \pm \sigma$, we can deduce the location of the peak using only statistical knowledge instead of mindless mathematical calculations.
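As a quick numerical check of the peak location (a sketch, simply evaluating the Rayleigh pdf on a fine grid):

```python
import numpy as np

sigma = 2.0
r = np.linspace(0, 5 * sigma, 100_001)
pdf = (r / sigma**2) * np.exp(-r**2 / (2 * sigma**2))   # Rayleigh pdf

r_peak = r[np.argmax(pdf)]   # lands at r = sigma, up to grid resolution
```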
45,683
Intuition for Rayleigh PDF
Your "absolute error" random variable is Rayleigh distributed, with PDF $$\frac{x}{\sigma^2} e^{-x^2/(2\sigma^2)}, \quad x \geq 0$$ only if the mean of $e_i$ is centered on the point $x^*$. (If not, then you do have a "systematic error" which, if you can't factor out, would require you to use the Rice distribution to model the absolute error.) Note that your variable e is non-negative, and any "error" represents a positive value. So yes, if variance is non-zero then the cumulative probability at exactly zero (i.e., "no error") is zero. I find it convenient to think of this graphically: You're defining "absolute error" as a distance from some point in two dimensions. The only way for the distance to be zero is for a sample to exactly match $x^*$.
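A small simulation of the offset case (a sketch; it assumes SciPy's `rice` distribution, whose shape parameter is $b = \nu/\sigma$ with `scale` $= \sigma$, where $\nu$ is the systematic offset of the mean from $x^*$):

```python
import numpy as np
from scipy.stats import rice

rng = np.random.default_rng(0)
sigma, nu = 1.0, 3.0   # nu: systematic offset of the error mean from x*

# Distance from x* when the errors are centered nu away from it:
# this follows a Rice distribution rather than a Rayleigh.
d = np.hypot(rng.normal(nu, sigma, 100_000), rng.normal(0, sigma, 100_000))

rice_mean = rice.mean(b=nu / sigma, scale=sigma)
```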
45,684
How does Stigler derive this result from Bernoulli's weak law of large numbers?
It is indeed very simple algebra. Clearly you need to end up with no term in $P \big( \left| \frac{X}{N} - p \right| \leq \epsilon \big)$; since there's already a term in the complementary event, you have an obvious substitution you can perform. $$ P \bigg( \left| \frac{X}{N} - p \right| \leq \epsilon \bigg) > c\,P \bigg( \left| \frac{X}{N} - p \right| > \epsilon \bigg) $$ $$ 1-P \bigg( \left| \frac{X}{N} - p \right| > \epsilon \bigg)> c\,P \bigg( \left| \frac{X}{N} - p \right| > \epsilon \bigg)$$ $$ 1>(1+c)\,P \bigg( \left| \frac{X}{N} - p \right| > \epsilon \bigg)$$ $$ \frac{1}{1+c}>\,P \bigg( \left| \frac{X}{N} - p \right| > \epsilon \bigg)$$ etc. (you need $c>-1$ for this last step to work like that but it's said to be "large positive" at the top so we should be fine)
45,685
How to speed up hyperparameter optimization?
Here are some general techniques to speed up hyperparameter optimization.

If you have a large dataset, use a simple validation set instead of cross validation. This will increase the speed by a factor of ~k, compared to k-fold cross validation. This won't work well if you don't have enough data.

Parallelize the problem across multiple machines. Each machine can fit models with different choices of hyperparameters. This will increase the speed by a factor of ~m for m machines.

Avoid redundant computations. Pre-compute or cache the results of computations that can be reused for subsequent model fits (or iteratively updated with less work than computing from scratch). A couple of simple examples: if the model needs a pairwise distance matrix, compute all distances at the beginning, rather than re-computing them each time a model is fit. If the model needs a mean and covariance matrix, iteratively update them depending on the points in each training set, rather than computing from scratch.

If using grid search, decrease the number of hyperparameter values you're willing to consider (i.e. use a coarser grid). This can give potentially large speedups because the total number of combinations scales multiplicatively. The risk is that, if the grid becomes too coarse, you may miss the optimal values. You can compensate for this by performing an initial, coarse search, then performing a finer search in the neighborhood of the best initial values. This is safer than the first strategy, but there's still some risk that the initial, coarse search could end up in a suboptimal neighborhood from which the finer search can't escape.

Use random search. Bergstra and Bengio (2012) describe this strategy, and show that it can give large speedups compared to grid search. The reason is that model performance may be much more sensitive to some hyperparameters than others, and the important ones aren't known a priori. Grid search can waste iterations by trying many different values for non-influential hyperparameters while holding the influential ones fixed. Random search samples random hyperparameter values from some simple distribution, so all hyperparameters change on every iteration. Changing non-influential hyperparameters has little effect (so nothing is lost), and changing the influential ones gives more chance for improvement.

Whether using grid search or random search, make sure that the spacing between values is appropriate for each hyperparameter (in grid search, this is the grid spacing; in random search, it's related to the distribution from which hyperparameter values are sampled). For example, it makes sense to try linearly spaced values for some hyperparameters, and logarithmically spaced values for others. Using inappropriate spacing can either lead to poor coverage (increasing the risk of missing the optimal value) or require overly dense coverage to compensate (wasting computation time).

Use Bayesian optimization. This is a much more complicated approach, and is an active topic of research. In the context of hyperparameter optimization, it tries to learn a model of the loss function over hyperparameters, and use this model to adaptively choose the next hyperparameter values to try. The value of the loss function for the new hyperparameters is used in turn to update the model.

References: Bergstra and Bengio (2012). Random search for hyper-parameter optimization.
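A minimal sketch of random search with per-parameter spacing. Everything here is invented for illustration: the two hyperparameter names and the loss function are hypothetical stand-ins, not any real model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hyperparams():
    # linear spacing for an integer depth-like parameter,
    # logarithmic spacing for a rate-like parameter
    return {
        "max_depth": int(rng.integers(2, 11)),
        "learning_rate": 10 ** rng.uniform(-4, 0),
    }

def loss(params):
    # hypothetical loss: mostly sensitive to learning_rate, barely to max_depth
    return (np.log10(params["learning_rate"]) + 2) ** 2 + 0.01 * params["max_depth"]

# each of the 50 random draws tries a fresh value of the influential parameter
best = min((sample_hyperparams() for _ in range(50)), key=loss)
```

Because the loss barely depends on max_depth, every draw still contributes a new learning_rate value; a 50-point grid would have spent many of those evaluations re-testing the same few rates.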
45,686
How to speed up hyperparameter optimization?
The primary approach is to allow evaluations on subsets of your data, for a limited number of iterations, or for a limited amount of time (if your algorithm is an anytime algorithm). You can then exploit this with a hyperparameter optimization algorithm that supports multifidelity evaluations, i.e., one that can use low-fidelity objective function evaluations (e.g., obtained on a subset of your data or after a small number of iterations) to adaptively gain insight about high-fidelity evaluations and substantially speed up the search process. Only use random search if your budget of function evaluations is well below 10x the number of variables and/or your problem is very noisy. Otherwise, there are almost certainly better algorithms for your case.
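As an illustration of the multifidelity idea, here is a toy successive-halving loop. The evaluation function is invented for the example; real low-fidelity evaluations would come from data subsets or truncated training.

```python
import numpy as np

rng = np.random.default_rng(1)

def evaluate(config, fidelity):
    # hypothetical objective: a noisy low-fidelity estimate of the true loss,
    # with noise shrinking as the fidelity (e.g. subset size) grows
    true_loss = (config - 0.7) ** 2
    return true_loss + rng.normal(0, 0.5 / np.sqrt(fidelity))

configs = list(rng.uniform(0, 1, size=16))
fidelity = 10
while len(configs) > 1:
    scores = [evaluate(c, fidelity) for c in configs]
    # keep the better half at this fidelity, then double the budget for survivors
    order = np.argsort(scores)
    configs = [configs[i] for i in order[: len(configs) // 2]]
    fidelity *= 2
best = configs[0]
```

Most of the budget is spent on cheap screening: only the handful of surviving configurations ever get evaluated at the highest fidelity.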
45,687
How to speed up hyperparameter optimization?
Take a look at the optimiser built in to dlib. The technique it uses is "mathematically [proven to be] better than random search in a number of non-trivial situations".
45,688
When does one use $\frac{1}{\sqrt{n}}$ and when does one use $1.96\sqrt{\frac{p(1-p)}{n}}$?
The confidence interval with $\frac{1}{\sqrt{n}}$ is based on the same idea as the confidence interval with $1.96\sqrt{\frac{p(1-p)}{n}}$ but is more "conservative", in the sense that it is larger. The reason for that is that the function $$f(x) = x \left(1-x\right), x \in [0,1] $$ can be shown (with elementary calculus) to be maximal for $x= \frac{1}{2}$. Thus $$1.96\sqrt{\frac{p(1-p)}{n}} \leq 1.96 \sqrt{ \frac{1}{4n}} \approx \frac{1}{\sqrt{n}} $$ The confidence interval with $1.96\sqrt{\frac{p(1-p)}{n}}$ will be close to the one with $\frac{1}{\sqrt{n}}$ for $p \approx 1/2$ but since in none of your problems is that the case, I would probably use the classical $1.96\sqrt{\frac{p(1-p)}{n}}$ CI.
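A quick numeric check of the inequality, using hypothetical values n = 400 and p = 0.1:

```python
import math

n, p = 400, 0.1
classical = 1.96 * math.sqrt(p * (1 - p) / n)  # margin from the usual interval: 0.0294
conservative = 1 / math.sqrt(n)                # the p-free bound: 0.05
# the conservative bound is always at least as wide as the classical margin
```

For p this far from 1/2, the conservative interval is noticeably wider, which is why the classical form is preferable when an estimate of p is available.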
45,689
When does one use $\frac{1}{\sqrt{n}}$ and when does one use $1.96\sqrt{\frac{p(1-p)}{n}}$?
For confidence intervals we always use the form $\hat{p} \pm z\sqrt{\frac{p(1-p)}{n}}$; for the 95% confidence interval, $z=1.96$. Since the term $z\sqrt{\frac{p(1-p)}{n}}$ depends on $p$, some proportions have bigger confidence intervals than others. The worst case scenario is $p=0.5$: this has the most variation, which is why its confidence interval is largest. Before we collect any data we might want to estimate how good the results could be. Since we don't yet have data for the proportion, we can just use the worst case scenario. In the worst case, the accuracy indicated by the confidence interval is $z\sqrt{\frac{0.5(1-0.5)}{n}}$, which for the 95% confidence interval ($z=1.96$) simplifies to $\frac{0.98}{\sqrt{n}}$. This is where your idea of $\frac{1}{\sqrt{n}}$ comes from. The constant in the numerator is $0.98$, which is close to $1$, but this holds only for the 95% confidence interval; other levels of confidence give other values. In your questions you have data for every case, so you won't need to use the worst case scenario. However, it's still useful to remember for times when you are planning an experiment and want to see whether it is feasible before collecting any data.
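The worst-case bound is handy for planning sample sizes before any data exist. For example, with an assumed target margin of ±3% at 95% confidence:

```python
import math

margin = 0.03
# worst case p = 0.5: margin = 0.98 / sqrt(n)  =>  n = (0.98 / margin)^2
n_required = math.ceil((0.98 / margin) ** 2)  # 1068
```

This is the origin of the roughly-a-thousand-respondents rule of thumb for opinion polls.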
45,690
concurvity in negative binomial GAM
If you have concurvity values that high, I would want to do some additional checks to see if the concurvity was leading to problems in the estimation of the smooths, much as I would if I were fitting a GLM with highly collinear covariates (i.e. with large VIFs).

The reason for the difference between full = TRUE (the default) and full = FALSE is that the former considers whether any particular term is concurved with some combination of all the other terms in the model. In other words, with full = TRUE we are concerned with identifying which smooths can be approximated by any combination of the other smooths in the model. With full = FALSE, we are no longer looking at combinations of the other smooths but at the pairwise concurvities that combine to give the full = TRUE value.

In your example, the 0.9 concurvity with full = TRUE is telling us that this smooth can be well approximated by some combination of the other smooths (or parametric terms, if the smooths are close to those kinds of functions). The full = FALSE information breaks down this 0.9 and tells us which of the other smooths (or parametric effects) are most strongly concurved with the indicated variable. So that 0.9 might be the result of strong concurvity with the variable whose pairwise concurvity is 0.5 plus concurvity with some other smooths.

As such, use full = TRUE to flag which variables might give you cause for concern; then use full = FALSE to see whether one variable, or a small set of variables, is responsible for the concurvity. Once you have identified the potential variables associated with high concurvity, you can try dropping one of them from the model, refitting, and comparing the estimated smooths between the two models. Watch, for example, for smooth functions that change sign or shape strongly depending on whether the covariates with which they are concurved are included in the model.

You will need to use your domain knowledge to decide whether to retain the highly concurved smooth or not; perhaps you don't need it, as the "effect" represented by the smooth is actually contained in the smooths of other covariates (like including smooths of temperature and altitude and slope in the same model; we probably don't need all...).
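A rough linear analogue of the full = TRUE idea can be sketched numerically. mgcv's actual concurvity measures work on the smooths' basis expansions, so this is only an illustration of the "one covariate well approximated by the others" notion, with invented data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.uniform(0, 1, n)
x2 = x1 + rng.normal(0, 0.05, n)  # nearly a function of x1: high "concurvity"
x3 = rng.uniform(0, 1, n)         # unrelated covariate

def r2_vs_others(target, others):
    # share of the target's variance explained by a least-squares
    # fit on the other covariates (plus an intercept)
    A = np.column_stack([np.ones(len(target)), *others])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    resid = target - A @ coef
    return 1 - resid.var() / target.var()

high = r2_vs_others(x2, [x1, x3])  # close to 1: cause for concern
low = r2_vs_others(x3, [x1, x2])   # close to 0: fine
```

A value near 1, like the 0.9 in the question, means the term adds little that the other terms cannot already represent.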
45,691
Does the Cox proportional hazards model process past values for time-varying covariates?
It seems that your question concerns not the specific R function coxph, but survival models in general. The vignette, when speaking about "covariate values of each subject just prior to the event time", refers to the hazard function $h(t)$. This function in fact only takes into account the current values of covariates, and determines the risk for individuals "alive" at day $t$ to suffer the event at day $t+1$. However, to calculate the number of individuals that are still "alive" at $t$, you need the full history of covariates - and that is where the past values come in. One could build a model iteratively, by calculating the number of individuals for every $t$ and testing whether they survive one more day, but using the cumulative hazard function $H(t)$ makes everything much easier to process. To give an example, consider the event "getting hit by a car", and covariate "amount of traffic today". High amount of traffic on one day means that the risk to get hit that particular day, $h(t)$, is high; also, only few individuals are likely to survive past it ($H(t)$ is high). However, given that you survived that day, the momentary risk next day $h(t+1)$ is not influenced by the past. If you specifically want to model some damage which persists after exposure, you will need to explicitly add that as a covariate. I should add a disclaimer that I am not familiar with using time-dependent covariates in that particular R function, so anyone more experienced with it is welcome to correct me.
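The usual way this is encoded in software is the counting-process (start, stop] format, where each subject contributes one row per interval of constant covariate values. A schematic in Python; the numbers and the "traffic" covariate are invented:

```python
# one hypothetical subject followed from t=0 to t=30, with the covariate
# "traffic" changing twice; the event occurs at t=30
rows = [
    # (start, stop, event, traffic)
    (0, 10, 0, 1.2),
    (10, 25, 0, 3.5),
    (25, 30, 1, 0.8),
]

def covariate_at(rows, t):
    # the value the partial likelihood "sees" just prior to time t
    for start, stop, event, x in rows:
        if start < t <= stop:
            return x
    return None
```

At the event time t = 30, only the current value 0.8 enters the hazard; the earlier values 1.2 and 3.5 matter only through the subject having remained at risk across those intervals.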
45,692
Does the Cox proportional hazards model process past values for time-varying covariates?
The answer from @juod gets to the essential point: calculations in Cox regressions are based on instantaneous values of covariates "just prior to" each event. Prior history is taken into account in the following way: each individual still at risk at an event time survived up until then, so those individuals' covariate values (potentially time-varying) at all earlier event times have already been incorporated into the regression. As the answer in the Cross Validated page to which you linked states, however, you may construct the instantaneous values of the time-dependent covariates in any way that makes sense. That question was posed specifically about a cumulative covariate, integrated over time, so that its value for an individual at each event time depended (in that particular way) on all previous values for that individual. If some way of treating your covariate values like that makes sense for your application, then do so. Think carefully about how you wish to proceed. Is the simple fact that an individual has survived up to a particular event time, given the individual's prior covariate values, enough of a way to incorporate history? Or do you need to add up or average or otherwise assess prior values over some period of time to find the best instantaneous relation between the covariate and outcome in Cox regression? That decision must be based on your understanding of the underlying subject matter.
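If history should enter explicitly, you construct it yourself when building the time-varying covariate. Two common constructions, sketched with numpy on invented daily exposure values:

```python
import numpy as np

# hypothetical daily exposure values for one subject
exposure = np.array([0.0, 1.0, 2.0, 0.5, 0.0])

# cumulative exposure up to and including each day
cumulative = np.cumsum(exposure)  # [0., 1., 3., 3.5, 3.5]

# average over the current and previous two days (a short recency window)
window = 3
padded = np.concatenate([np.zeros(window - 1), exposure])
recent_mean = np.array(
    [padded[i:i + window].mean() for i in range(len(exposure))]
)
```

Whether a running total, a windowed average, or something else is appropriate is exactly the subject-matter decision described above.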
45,693
Contrasting covariance calculation using R, Matlab, Pandas, NumPy cov, NumPy linalg.svd
Note that numpy.cov() considers its input data matrix to have observations in each column, and variables in each row, so to get numpy.cov() to return what other packages do, you have to pass the transpose of the data matrix to numpy.cov(). The Python code that you linked can be used to simulate what other packages do, but it contains some errors: N should be the number of rows, not columns, and you have to perform the matrix multiplication in the other order:

import numpy as np

def cov(X0):
    print("\n==\nMatrix:")
    print(X0)
    X = X0 - X0.mean(axis=0)
    N = X.shape[0]  # !!!
    fact = float(N - 1)
    print("Covariance:")
    print(np.dot(X.T, X) / fact)  # !!!

X0 = np.vstack(([1, 2], [3, 4]))
cov(X0)
cov(X0.T)
X0 = np.vstack(([1, 2], [3, 4], [22, 44]))
cov(X0)
cov(X0.T)

With these fixes, the covariance behaves as expected:

==
Matrix:
[[1 2]
 [3 4]]
Covariance:
[[ 2.  2.]
 [ 2.  2.]]

==
Matrix:
[[1 3]
 [2 4]]
Covariance:
[[ 0.5  0.5]
 [ 0.5  0.5]]

==
Matrix:
[[ 1  2]
 [ 3  4]
 [22 44]]
Covariance:
[[ 134.33333333  274.33333333]
 [ 274.33333333  561.33333333]]

==
Matrix:
[[ 1  3 22]
 [ 2  4 44]]
Covariance:
[[  0.5   0.5  11. ]
 [  0.5   0.5  11. ]
 [ 11.   11.  242. ]]

As for the numpy.linalg.svd() code, you need to center the data matrix by subtracting off the variable means, and the multiplication involving the V matrix must be performed in the other order. With these changes you will replicate everybody else's behavior:

import numpy as np

def cov(X0):
    print("\n==\nMatrix:")
    print(X0)
    X = X0 - X0.mean(axis=0)
    U, s, V = np.linalg.svd(X, full_matrices=0)
    D = np.dot(np.dot(V.T, np.diag(s**2)), V)
    Dadjust = D / (X0.shape[0] - 1)
    print("Covariance:")
    print(Dadjust)
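As a sanity check, the corrected manual computation should agree with numpy's built-in covariance once rowvar=False is used to treat columns as variables:

```python
import numpy as np

X0 = np.array([[1, 2], [3, 4], [22, 44]], dtype=float)
Xc = X0 - X0.mean(axis=0)
manual = Xc.T @ Xc / (X0.shape[0] - 1)
# rowvar=False tells numpy.cov that columns are variables,
# matching the R / Matlab / pandas convention
builtin = np.cov(X0, rowvar=False)
```

Passing rowvar=False is equivalent to transposing the input, so either route reproduces the other packages' results.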
45,694
Interpreting seasonality in ACF and PACF plots
As you've rightly pointed out, the ACF in the first image clearly shows an annual seasonal pattern, with peaks at the yearly lags 12, 24, etc. The log transform simply puts the series on a logarithmic scale; after the transformation, the size of the seasonal and random fluctuations appears roughly constant over the yearly cycle and no longer depends on the level of the series. Since the seasonality is annual, the appropriate differencing for this data set is seasonal differencing at lag 12. After that, the log-transformed series should look like random fluctuation around a stable level; the elimination of the annual cycle seems about right.
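Lag-12 seasonal differencing removes an annual cycle exactly when the cycle repeats with period 12. A toy check on an invented monthly series:

```python
import numpy as np

t = np.arange(48)
# linear trend plus a pure annual cycle, monthly data
series = 0.1 * t + np.sin(2 * np.pi * t / 12)

# seasonal difference at lag 12: y_t minus y_{t-12}
seasonal_diff = series[12:] - series[:-12]
# the sine cancels; what remains is the constant yearly trend increment 0.1 * 12
```

On real data the differenced series won't be exactly constant, but the seasonal peaks in its ACF should largely disappear.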
45,695
Interpreting seasonality in ACF and PACF plots
Seasonal differencing is relevant when the time series is seasonally integrated. Consider the simplest form of seasonal integration -- a SARIMA$(0,0,0)\times(0,1,0)_h$ model with a seasonal period $h$. Under this model the original time series is made up of $h$ random walks that alternate every season: each season has its own random walk, and the random walks of the different seasons are unrelated. Here is an example with $h=4$ (circles of different colours are used to distinguish between the seasons): That may or may not be sensible in applications, since under seasonal integration two consecutive time points can drift arbitrarily far apart, which you would not always expect. A sign that a series is not seasonally integrated is a significant PACF at seasonal lags after seasonal differencing: for a seasonally non-integrated series, taking seasonal differences does not solve a problem but rather creates one (the problem of overdifferencing). The presence of seasonal integration can be formally tested by the OCSB or Canova-Hansen tests. If the series is seasonally non-integrated, you may consider a SARIMA$(p,d,q)\times(P,0,Q)_h$ model, or use dummy variables or Fourier terms.
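A minimal simulation of the seasonal random walk described above (my own sketch, not the figure from the answer): under SARIMA$(0,0,0)\times(0,1,0)_h$ we have $X_t = X_{t-h} + e_t$, so the lag-$h$ difference recovers the innovations exactly:

```python
import random

random.seed(1)
h, n = 4, 40  # seasonal period and series length

# SARIMA(0,0,0)x(0,1,0)_h: X_t = X_{t-h} + e_t, so each of the h seasons
# follows its own independent random walk (they never interact).
e = [random.gauss(0, 1) for _ in range(n)]
x = list(e[:h]) + [0.0] * (n - h)
for t in range(h, n):
    x[t] = x[t - h] + e[t]

# The lag-h (seasonal) difference recovers the white-noise innovations:
# no seasonal structure remains after differencing.
seasonal_diff = [x[t] - x[t - h] for t in range(h, n)]
```

Applying the same seasonal difference to a series that is *not* seasonally integrated would instead introduce artificial negative autocorrelation at lag $h$ -- the overdifferencing problem mentioned above.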
45,696
Probability of class in binary classification
You are considering different classifiers, but in fact this is not a classification problem. You are not interested in classifying your data as zeros and ones, but in predicting the probabilities that individual cases are zeros and ones. In this case the usual method of choice, designed especially for such problems, is logistic regression. Contrary to popular belief, logistic regression is not a classifier: it predicts probabilities, so it does exactly what you want.
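To illustrate the point (with made-up coefficients, purely for the sake of example), a fitted logistic regression maps a linear score to a probability via the logistic function; turning that probability into a 0/1 label is a separate, optional step:

```python
import math

def predict_proba(x, intercept=-1.5, slope=0.8):
    """P(y = 1 | x) under a hypothetical, made-up fitted logistic model."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * x)))

# The model outputs probabilities; thresholding them (e.g. at 0.5) to get
# class labels is a decision step layered on top, not part of the model.
probs = [predict_proba(x) for x in (0.0, 2.0, 5.0)]
```

Keeping the probabilities rather than the thresholded labels preserves exactly the information the question asks for.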
45,697
What is the inverse square of a distance (Euclidean)?
Imagine that we want to classify the unknown gray point in the data cloud as red or blue. The algorithm is set up to measure Euclidean distances to the $k=3$ closest neighbors: two of them are blue, and the third one is red. Even assigning uniform weight (an equal vote) to each of the three points, the algorithm correctly classifies the unknown gray ball as blue. However, we may wonder why we gave an equal vote to the third ball (the red ball tangentially touching the "sphere of influence" around the gray ball): it is not entirely fair that it exerts as much leverage on the classification as the first and second, closer balls. Indeed, equal votes can produce a spurious classification: in the second configuration, two of the neighbors incorrectly point to a red assignment and win the final vote, overruling the influence of the much closer blue neighbor. This effect can be corrected by weighting each vote by the inverse of the squared Euclidean distance, so that the farthest-away red balls have the least say in the classification. Computationally, the algorithm resolves each case with a formula such as: $$\hat y=\operatorname{arg\,max}_r\left(\sum_{i=1}^k w_{(i)}\,\mathbf{1}_{y_{(i)}=r}\right)$$ where $r$ ranges over the classes (in this case $2$: {red, blue}) and $\mathbf{1}$ is the indicator variable: the weighted sum is obtained for each class. Finally, the weights $w_{(i)}$ are calculated through a kernel function, which in the case you describe is the inverse-square kernel $\frac{1}{d^2}.$
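The voting formula above can be sketched in a few lines (the distances below are made up to reproduce the "close blue, farther reds" configuration):

```python
from collections import defaultdict

def knn_vote(neighbors):
    """Weighted k-NN vote: each (distance, label) pair contributes 1/d^2."""
    votes = defaultdict(float)
    for d, label in neighbors:
        votes[label] += 1.0 / d ** 2
    return max(votes, key=votes.get)

# Hypothetical distances: one blue ball close by, two red balls farther out.
neighbors = [(1.0, "blue"), (2.5, "red"), (3.0, "red")]

# An unweighted majority vote would pick "red" (2 votes to 1); the
# inverse-square weights let the much closer blue neighbour win instead.
winner = knn_vote(neighbors)
```

Here blue receives weight $1/1^2 = 1$ while the two reds together receive only $1/2.5^2 + 1/3^2 \approx 0.27$, so the closer neighbour dominates, exactly as the answer describes.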
45,698
How to interpret log-log regression coefficients with a different base to the natural log
If you understand how to convert units--such as kilograms to pounds or meters to feet--then you will understand perfectly what is going on here, too, because it involves a simple change of units. (For more about this, please see Gung's answer to How will changing the units of explanatory variables affect a regression model?) The "base" of a logarithm is its units of measurement. That is, changing bases amounts to multiplication by a constant: $$\log_b(x) = c\, \log(x)\tag{1}$$ where $c = 1/(\log b)$. This gives you all you need to answer the questions. Begin with the first model, $$\log \text{brain} = \alpha + \beta\,\log\text{body}.$$ Both sides use natural logs: that's their common unit of measurement for logarithms. Specifically, $\alpha$ is measured in "nats" and $\beta$ is "nats per nat": it is unitless. Compare this to the second model using $(1)$ $$c \log \text{brain} = \log_{10}\text{brain} = \alpha_{10} + \beta_{10}\,\log_{10}\text{body} = \alpha_{10} + \beta_{10}\,c \log\text{body}.$$ Both sides use common logs. Changing back to the original units by dividing through by $c$ yields $$\log \text{brain} = \frac{1}{c}\alpha_{10} + \beta_{10}\,\log\text{body}.$$ Comparing this to the first model shows immediately that $$\alpha = \frac{1}{c}\alpha_{10},\quad \beta = \beta_{10}.$$ In other words, the slopes do not change (they cannot, since they are unitless) but the intercept must have its units converted (from nats to a common log) to match those of the new base of logarithms. The same relationship must hold among the estimated coefficients, too. In particular, the invariance of the slopes shows you do not need to change your interpretation of them. As an example, consider the estimates in your code. It reports $\hat\alpha = -1.8980$ and $\hat\beta = 0.6518$. We therefore anticipate that $\hat\alpha_{10} = c\,\hat\alpha$ where $c = 1/(\log 10) = 1/(2.30\ldots)$. The value works out to $\hat \alpha_{10} = (-1.8980)/(2.30\ldots) = -0.8243$. Sure enough, that's precisely what the second model outputs for the intercept (and the two slope estimates are the same).
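A quick numerical check of this relationship (the original Animals data set is not reproduced here, so the body/brain values below are synthetic, and a hand-rolled least-squares fit stands in for `lm`):

```python
import math

def ols(x, y):
    """Closed-form simple linear regression; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx, slope

# Synthetic body and brain weights (illustrative values only).
body = [1.35, 465.0, 36.33, 27.66, 1.04, 11700.0, 2547.0, 187.1]
brain = [8.1, 423.0, 119.5, 115.0, 5.5, 50.0, 4603.0, 419.0]

a_ln, b_ln = ols([math.log(x) for x in body], [math.log(y) for y in brain])
a_10, b_10 = ols([math.log10(x) for x in body], [math.log10(y) for y in brain])

# The slope is invariant to the base; the intercept is rescaled by 1/log(10).
```

Whatever the data, switching both axes from natural to common logs rescales them by the same factor $1/\log 10$, which leaves the slope unchanged and multiplies the intercept by that factor.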
45,699
How to interpret log-log regression coefficients with a different base to the natural log
The key issue here is that you have the same base of the log on both sides. An estimated $\beta_1=1$ tells you that if you multiply body by the base of the logs, you multiply brain by the base raised to the power $\beta_1$ (here, by the base itself). Since the base of the logs is the same on both sides, it factors out of the slope. If you raise each base to the power of its estimated intercept, you will find all three give the same value.
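A short numeric check of the last claim (taking the natural-log intercept reported in the question, with the other intercepts implied by the change-of-base rule):

```python
import math

# Natural-log intercept reported in the question's regression output.
alpha_e = -1.8980

# Changing the log base b rescales the intercept to alpha_e / ln(b);
# raising each base back to its own intercept recovers the same constant.
alpha_10 = alpha_e / math.log(10)
alpha_2 = alpha_e / math.log(2)
constants = [math.exp(alpha_e), 10 ** alpha_10, 2 ** alpha_2]
```

All three entries of `constants` coincide, since $b^{\alpha_e/\ln b} = e^{\alpha_e}$ for any base $b$.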
45,700
n observations from a random variable VS. 1 observation from n i.i.d random variables
When modelling a sample $(x_1,\ldots,x_n)$ as an i.i.d. sample from a given distribution $F$, the correct approach is to see this sample as the realisation of a random vector $(X_1,\ldots,X_n)$ made of $n$ independent random variables, each distributed according to $F$: $$(x_1,\ldots,x_n)=(X_1,\ldots,X_n)(\omega)\qquad\omega\in\Omega$$ The notion of "$n$ realizations of a single random variable" is a shortcut that is not well-defined, because one cannot express independence with a single random variable.