45,601
Interpretations of negative confidence interval
It seems to me SEM has little use except to calculate CI? What quantitative information can we derive from SEM? We can say the true mean weight of the 1000 chickens is likely (very qualitative) to fall between 2 kg and 8 kg (sample mean ± SEM), but do we know the probability? Let us begin with an observation. The SEM ...
45,602
Interpretations of negative confidence interval
What you did: you created confidence intervals under the assumption that chicken weights are drawn from a normal distribution (with value range $(-\infty, \infty)$). In fact these can be drawn from another distribution with $\mathbb{R}_+$ support, e.g. an Erlang or chi distribution, but when the sample size is $> 50$ we can assume t...
45,603
Interpretations of negative confidence interval
I would just (first of all) work on the logarithmic scale and back-transform the confidence limits obtained on that scale. That way you're assured of positive limits. Going full Bayes on this is an answer of wide appeal, but as you're asking this question I am not clear that "learn a whole new approach to statistics" is l...
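The log-then-back-transform recipe described above can be sketched in a few lines of Python. The lognormal data here are hypothetical stand-ins for positive measurements like weights:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
weights = rng.lognormal(mean=1.0, sigma=0.5, size=50)  # hypothetical positive data

# build a t-interval on the log scale, then exponentiate the limits
logw = np.log(weights)
m = logw.mean()
se = logw.std(ddof=1) / np.sqrt(len(logw))
tcrit = stats.t.ppf(0.975, df=len(logw) - 1)

lo, hi = np.exp(m - tcrit * se), np.exp(m + tcrit * se)
print(lo, hi)  # both limits are positive by construction
```

Because exp is monotone, the back-transformed limits still bracket the back-transformed center (the geometric mean), and they can never dip below zero.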
45,604
Interpretations of negative confidence interval
To produce a probability for weights you should really apply Bayesian methods in this case. This is not about frequentism versus Bayes, but you have some very strong prior information here: you know that a chicken's weight is not negative and that it is not 0.5 kg. Standard frequentist methods are basically open for al...
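The idea of encoding "weights cannot be negative" in the prior can be illustrated with a tiny grid-approximation posterior. Everything here is a hedged sketch: the data are simulated, the noise standard deviation is assumed known (1.5), and the uniform prior support (0.5 kg to 10 kg) is a hypothetical choice:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
data = rng.normal(2.0, 1.5, 30).clip(min=0.1)  # hypothetical chicken weights

# prior: uniform over physically plausible mean weights only (zero mass below 0.5 kg)
mu_grid = np.linspace(0.5, 10.0, 2000)
log_like = np.array([stats.norm.logpdf(data, mu, 1.5).sum() for mu in mu_grid])
post = np.exp(log_like - log_like.max())
post /= post.sum()

# the posterior cannot put probability on negative means, by construction
print((mu_grid < 0).any())  # False
```

Unlike a normal-theory CI, any posterior interval read off this grid is automatically confined to the prior's positive support.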
45,605
Tail bound of beta distribution when $\alpha$ is sufficiently close to zero while $\beta$ greater than 1
The following analysis obtains bounds that hold for sufficiently small $\alpha$ and are expressed in terms of elementary functions. The tail probability, written as a function of $\alpha\gt 0,$ is $$p_{t,\beta}(\alpha) = {\Pr}_{\alpha,\beta}(X\gt t) = \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}\int_t^1 s^{...
45,606
Tail bound of beta distribution when $\alpha$ is sufficiently close to zero while $\beta$ greater than 1
Graphical comment: CDF plots of $\mathsf{Beta}(a, 2)$ for $a=.01, .05, .1, .15, .2, .25, .3$ (respective colors red through purple).
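The behavior shown in those CDF plots can be checked numerically: as the first shape parameter $a$ shrinks toward 0, $\mathsf{Beta}(a, 2)$ piles its mass near 0 and the tail probability $\Pr(X > t)$ vanishes. A small scipy check (the choice $t = 0.5$ is arbitrary):

```python
from scipy import stats

# tail probabilities Pr(X > t) for Beta(a, 2) across the a values from the plot
t = 0.5
tails = {a: stats.beta.sf(t, a, 2) for a in (0.01, 0.05, 0.1, 0.2, 0.3)}
for a, pr in tails.items():
    print(a, pr)  # tail probability grows monotonically with a
```

This matches the red-through-purple ordering of the CDF curves: smaller $a$ means a CDF that jumps toward 1 almost immediately.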
45,607
GAM factor smooth interaction--include main effect smooth?
You need to be careful with ordered factors here in mgcv as they aren't doing what I think you want to be fitting. If you pass an ordered factor to by, then gam() etc. set up a smooth for all the levels except the reference level, and furthermore they are set up as smooth differences between the reference level and the...
45,608
Normality testing with very large sample size?
Continuation from comment: If you are using simulated normal data from R, then you can be quite confident that what purport to be normal samples really are. So there shouldn't be 'quirks' for the Shapiro-Wilk test to detect. Checking 100,000 standard normal samples of size 1000 with the Shapiro-Wilk test, I got rejecti...
45,609
Normality testing with very large sample size?
As @gg pointed out in a comment, this entire discussion is pointless without defining how normal-like the data has to be for us to consider it "normal enough". In practice, I often like the following criteria: skewness close to 0, maybe within a (-1,1) range, or whatever you feel more comfortable with depending on "how normal-...
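The skewness criterion above is easy to compute directly; a quick sketch on simulated data (excess kurtosis added as a companion check, which is my addition, not the answer's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(size=100_000)   # stand-in for a large sample

skew = stats.skew(x)
kurt = stats.kurtosis(x)       # excess kurtosis; 0 for an exact normal
print(skew, kurt)              # both near 0, inside the suggested (-1, 1) comfort zone
```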
45,610
Normality testing with very large sample size?
...However, if the sample size is very large, the test is extremely "accurate" but practically useless because the confidence interval is too small. They will always reject the null, even if the distribution is reasonably normal enough... What if you take a sub-sample of size 100 or 300 from the large sample consistin...
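The sub-sampling suggestion can be sketched as follows (sizes are the ones floated in the answer; the data are simulated):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
big = rng.normal(loc=10, scale=2, size=1_000_000)   # the "very large" sample

# test a modest random sub-sample instead of the full million points
sub = rng.choice(big, size=300, replace=False)
p = stats.shapiro(sub).pvalue
print(p)
```

At n = 300 the test retains power against gross non-normality but is no longer hypersensitive to practically irrelevant deviations, which is the trade-off the answer is pointing at.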
45,611
How does logistic regression "elegantly" handle unbalanced classes?
No, we can't include the prevalence as a feature. After all, this is exactly what we are trying to model! What FH means here is that if there are features that contribute to the prevalence of the target, these will have appropriate parameter estimates in the logistic regression. If a disease is extremely rare, the inte...
45,612
Why does Judea Pearl call his causal graphs Markovian?
He is referring to the Parental Markov Condition (see theorems 1.2.7 and 1.4.1 of Causality). Given a graph $G$, we say a distribution $P$ is Markov relative to $G$ if every variable is independent of all its non descendants conditional on its parents. An acyclic causal model $M$ with jointly independent error terms in...
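The Parental Markov Condition can be checked numerically on a simulated linear chain $X \to Y \to Z$ with independent errors: conditioning on the parent $Y$ should make $X$ and $Z$ (nearly) uncorrelated. A sketch (coefficients are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
x = rng.normal(size=n)
y = 2 * x + rng.normal(size=n)     # X -> Y
z = -1.5 * y + rng.normal(size=n)  # Y -> Z

# regress the parent Y out of both X and Z, then correlate the residuals
bx, ax = np.polyfit(y, x, 1)
bz, az = np.polyfit(y, z, 1)
partial = np.corrcoef(x - (bx * y + ax), z - (bz * y + az))[0, 1]
print(abs(partial))  # near zero: X is independent of Z given its parent Y
```

The raw correlation of $X$ and $Z$ is far from zero here; it is the conditioning on the parent that screens off the non-descendant, which is exactly what "Markov relative to $G$" requires.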
45,613
Why does Judea Pearl call his causal graphs Markovian?
These graphs do satisfy the Markov property - once you condition on the parent node, from which the causal arrow comes, the variable is independent of earlier ancestors that causally affect that parent (unless there is a separate arrow directly from the ancestor node to the present node).
45,614
Why is the unbiased sample variance estimator so ubiquitous in science?
Dividing by (n+1) minimizes the MSE only for normally distributed data. In general, a variance estimator of the form $$s_k^2 = \frac{1}{k}\sum_{i=1}^n (x_i-\overline{x})^2$$ has minimal MSE for $$k=(n-1)\left[ \frac{1}{n}\left(\frac{\mu_4}{\mu_2^2} - \frac{n-3}{n-1}\right) + 1\right]$$ where $\mu_2$ and $\mu_4$ are the...
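The formula for the MSE-minimizing divisor $k$ can be verified directly: plugging in the normal kurtosis ratio $\mu_4/\mu_2^2 = 3$ collapses it to $k = n + 1$. A tiny check:

```python
def optimal_k(n, kurt_ratio):
    """MSE-minimizing divisor for the variance estimator, given mu4 / mu2^2."""
    return (n - 1) * ((kurt_ratio - (n - 3) / (n - 1)) / n + 1)

for n in (5, 10, 100):
    print(n, optimal_k(n, 3.0))  # equals n + 1 when the data are normal
```

Algebraically: $3 - \frac{n-3}{n-1} = \frac{2n}{n-1}$, so $k = (n-1)\left(\frac{2}{n-1} + 1\right) = n + 1$, confirming the answer's opening claim. For heavier-tailed data ($\mu_4/\mu_2^2 > 3$) the optimal divisor is larger still.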
45,615
Why is the unbiased sample variance estimator so ubiquitous in science?
Personally I prefer $\hat\sigma^2$ over $\hat\sigma_{MMSE}^2$ for a reason different from unbiasedness. If the estimation problem were in fact symmetric, i.e., too low values would be as bad as too high values of the same size, I'd think that the MSE would be a good measure, and optimum MSE would be in fact good. (Note...
45,616
Why is the unbiased sample variance estimator so ubiquitous in science?
Calculating sample variance as squared deviation divided by n + 1 (instead of n - 1) will lead to underestimating variance, which will lead to confidence intervals with lower than expected probability coverage. That's probably at least part of the reason that an unbiased estimator of variance is preferred over minimizi...
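The under-coverage claim can be checked by simulation: with the same data, the n + 1 divisor always yields a narrower t-interval than the n − 1 divisor, so its coverage can only be lower. A sketch (n = 10, 5000 replications, both choices are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)
n, reps = 10, 5000
tcrit = stats.t.ppf(0.975, df=n - 1)

cover = {n - 1: 0, n + 1: 0}   # count of intervals containing the true mean (0)
for _ in range(reps):
    x = rng.normal(size=n)
    ss = ((x - x.mean()) ** 2).sum()
    for d in cover:
        half = tcrit * np.sqrt(ss / d) / np.sqrt(n)
        cover[d] += (x.mean() - half <= 0 <= x.mean() + half)

print(cover[n - 1] / reps, cover[n + 1] / reps)  # n + 1 divisor under-covers
```

The n − 1 intervals hit the nominal 95% essentially exactly; the n + 1 intervals run visibly below it, which is the coverage penalty the answer describes.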
45,617
Can I include the product of two random variables? Or do I risk collinearity?
Operating under your stated assumption that $x_3=x_1x_2$ and $x_4=x_1/x_2$ need to be entertained as possible explanatory variables in a model of a response $Y$ (and therefore not summarily dropped because they might be a little inconvenient), it can be helpful to consider alternative ways of expressing this model. As ...
45,618
Can I include the product of two random variables? Or do I risk collinearity?
You won't have perfect collinearity (as per your question), but you do risk multicollinearity issues with your two additional regressors. While they're not algebraically linear combinations of the two predictors, it can be the case that these variables (x1-x4) in a particular sample might lie close to a linear subsp...
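The "close to a linear subspace in a particular sample" point is easy to exhibit with variance inflation factors. When the predictors are positive and vary over a narrow range (a hypothetical design I'm choosing to make the effect vivid), x1·x2 and x1/x2 are nearly linear in x1 and x2, so their VIFs explode:

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j: 1 / (1 - R^2) from regressing it on the rest."""
    others = np.column_stack([np.delete(X, j, axis=1), np.ones(len(X))])
    beta, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
    resid = X[:, j] - others @ beta
    return 1 / (1 - (1 - resid.var() / X[:, j].var()))

rng = np.random.default_rng(9)
x1 = rng.uniform(10, 12, 500)   # positive predictors over a narrow range
x2 = rng.uniform(10, 12, 500)
X = np.column_stack([x1, x2, x1 * x2, x1 / x2])

print([round(vif(X, j), 1) for j in range(4)])  # very large VIFs for all four
```

No column is an exact linear combination of the others, yet the near-linear relationships in this sample inflate every coefficient's variance, which is precisely the multicollinearity risk being described.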
45,619
Can I include the product of two random variables? Or do I risk collinearity?
No, you don’t risk collinearity because $x_i$ are not linearly dependent in general, i.e. the below equation has just one solution holding for all possible $x_i$: $$a_1x_1+a_2x_2+a_3x_3+a_4x_4=0$$ And that is $a_i=0$. In $x_5=x_1+x_2$ case, the following equation has non-zero solutions such that $a_1=a_2=-a_5$: $$a_1x_...
45,620
Can I include the product of two random variables? Or do I risk collinearity?
Don't do y = x1 + x2 + x3 + x4 ...which is equivalent in your case to y = x1 + x2 + (x1 * x2) + (x1 / x2) To include the product of the predictors, it is better practice to create your model with this formula: y = x1 * x2. The * sign indicates that you are also using the interaction effect, the equivalent of your ...
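The answer uses R's formula syntax; the same `y ~ x1 * x2` expansion (main effects plus interaction) is available in Python through statsmodels' formula interface. A sketch on simulated data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
df = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
df["y"] = 1 + 2 * df.x1 - df.x2 + 0.5 * df.x1 * df.x2 + rng.normal(size=200)

# 'x1 * x2' expands to x1 + x2 + x1:x2, exactly as in R's formula notation
fit = smf.ols("y ~ x1 * x2", data=df).fit()
print(list(fit.params.index))  # Intercept, x1, x2, x1:x2
```

Letting the formula machinery build the interaction (rather than hand-crafting a product column) keeps the main effects in the model and makes the coefficient on `x1:x2` directly interpretable as the interaction effect.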
45,621
Why no variance term in Bayesian logistic regression?
Logistic regression, Bayesian or not, is a model defined in terms of the Bernoulli distribution. The distribution is parametrized by the "probability of success" $p$, with mean $p$ and variance $p(1-p)$, i.e. the variance follows directly from the mean. So there is no "separate" variance term; this is what the quote seems to sa...
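The "variance follows from the mean" fact is a one-liner to verify empirically:

```python
import numpy as np

rng = np.random.default_rng(11)
p = 0.3
draws = rng.binomial(1, p, size=200_000)

# for a Bernoulli(p), the variance is determined by the mean: p * (1 - p)
print(draws.mean(), draws.var())  # ~0.3 and ~0.21
```

Since $p$ pins down both moments, there is simply no free dispersion parameter to estimate, unlike the $\sigma^2$ of a Gaussian likelihood.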
45,622
Does computing the test statistic for $H_{0}\text{: }\beta = c$, for $c \ne 0$ in a regression require a funky distribution?
When testing $H_0:\rho=\rho_0\,(\ne0)$ against any suitable alternative, one usually resorts to the variance stabilising Fisher transformation of the sample correlation coefficient $r$. The usual t-test you are referring to is mainly reserved for the case $\rho_0=0$. For moderately large $n$ (for example when $n\ge 25$...
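The Fisher-transform test described above can be sketched in a few lines; the values $r = 0.6$, $n = 100$, $\rho_0 = 0.5$ are made up for illustration:

```python
import numpy as np
from scipy import stats

def fisher_z_test(r, n, rho0):
    """Test H0: rho = rho0 via the variance-stabilising Fisher transform atanh(r)."""
    z = (np.arctanh(r) - np.arctanh(rho0)) * np.sqrt(n - 3)   # approx. standard normal under H0
    return z, 2 * stats.norm.sf(abs(z))

z, p = fisher_z_test(r=0.6, n=100, rho0=0.5)
print(z, p)
```

The transform's appeal is exactly what the answer says: unlike $r$ itself, $\operatorname{atanh}(r)$ has an approximately constant variance $1/(n-3)$ regardless of $\rho_0$, so the same normal reference distribution works for any null value, not just $\rho_0 = 0$.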
45,623
Does computing the test statistic for $H_{0}\text{: }\beta = c$, for $c \ne 0$ in a regression require a funky distribution?
If one is using something like a Wald statistic, a very simple way to do this is to test $H_0: \beta - c = 0$ vs $H_a: \beta - c \neq 0$ Since we know that $\hat \beta \text{ } \dot \sim N(\beta, se)$, then $\hat \beta -c \text{ } \dot \sim N(\beta - c, se)$, since $c$ is just a constant. This gives us a Wald statisti...
45,624
Do studentized residuals follow a t-distribution?
$\newcommand{\e}{\varepsilon}$$\newcommand{\0}{\mathbf 0}$$\newcommand{\E}{\text E}$$\newcommand{\V}{\text{Var}}$I'll start by working with this in matrix form. Let $y = X\beta + \e$ be our model with $\e \sim \mathcal N(\0, \sigma^2 I)$ and $X \in \mathbb R^{n\times p}$ full rank. Then $\hat y = Hy$ where $H = X(X^TX)...
45,625
Do studentized residuals follow t-distribution
jld's answer (+1) describes the construction of a $t$ random variable, but does not mention why independence is violated, so I figured I would chime in. The numerator $$ \frac{e_i}{\sigma\sqrt{1 - h_i}} \sim \mathcal N(0,1) $$ and the chi-squared random variable in the denominator $$ e^Te / \sigma^2 \sim \chi^2_{n-...
45,626
Wilcoxon Test - non normality, non equal variances, sample size not the same
tl;dr if you want to interpret the rejection of the null hypothesis as evidence that prices for women are greater than those for men, then you do need the assumption of equal variance (in fact, equal distributions) between the two populations. If you are satisfied with showing that the distribution of prices for women ...
45,627
Wilcoxon Test - non normality, non equal variances, sample size not the same
Ben Bolker's answer is great. I just wanted to add an answer to the "Would another test be better?" part. If you want to be able to conclude that prices for women are greater than those for men without the assumptions of equal distributions under the null, Brunner-Munzel's test can be recommended. For a full technical ...
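For concreteness, SciPy ships an implementation of the Brunner-Munzel test (`scipy.stats.brunnermunzel`); a minimal sketch with made-up price samples of unequal spread and unequal size:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical price samples: unequal variances and unequal sample sizes
prices_women = rng.normal(loc=110, scale=25, size=80)
prices_men = rng.normal(loc=100, scale=5, size=50)

# Brunner-Munzel tests H0: P(X < Y) == P(X > Y) without assuming
# equal variances or equal distribution shapes under the null
stat, p = stats.brunnermunzel(prices_women, prices_men)
```

The data and parameter values here are invented for illustration only; for the one-sided version, see the `alternative` argument in the SciPy documentation.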
45,628
What would be a Bayesian equivalent of this mixed-effects logistic regression model
Stan is the state-of-the-art in Bayesian model fitting. It has an official R interface through rstan. With rstan you would need to learn how to write your models in the Stan language. Alternatively, Stan also provides the rstanarm package (hat-tip to @ben-bolker for pointing out the omission), through which you can wri...
45,629
Searching for a weekly rhythm in weight loss/gain
Using differences between succeeding days (more susceptible to noise)

Compute for every week $i$ the seven values $$\begin{array}{rcl} d_{1i} &=& weight_{Monday}-weight_{Sunday} \\ d_{2i} &=& weight_{Tuesday}-weight_{Monday} \\ d_{3i} &=& weight_{Wednesday}-weight_{Tuesday} \\ d_{4i} &=& weight_{Thursday}-weight_{Wedn...
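The day-to-day differencing scheme can be sketched in a few lines of NumPy (the daily weights here are simulated, hypothetical data; slot 0 corresponds to one fixed day-of-week transition):

```python
import numpy as np

rng = np.random.default_rng(1)
n_days = 7 * 12                        # twelve weeks of hypothetical daily weigh-ins
weight = 80 + np.cumsum(0.05 * rng.standard_normal(n_days))

d = np.diff(weight)                    # day-to-day changes d_t = w_t - w_{t-1}
slot = np.arange(1, n_days) % 7        # which day-of-week transition each d_t is

# Mean change for each of the seven transitions (Sun->Mon, Mon->Tue, ...)
mean_by_slot = np.array([d[slot == k].mean() for k in range(7)])
```

A weekly rhythm would show up as one slot's mean being consistently different from the others, which can then be tested as described above.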
45,630
Searching for a weekly rhythm in weight loss/gain
I would plot this with the $x$-axis showing the days of the week and plot a separate line for each week with weight on the $y$-axis. I might also compute the mean for Mondays, Tuesdays, and so on and plot that as well with a contrasting symbol. If I were interested in finding days which stood out from the trend I would s...
45,631
Evaluating the hazard function when the CDF is close to 1?
If the concern is numerical stability, you could work with the log of the hazard function: $$\log h(t; \theta) = \log f(t;\theta) - \log(1-F(t;\theta))$$ You could use the log / log.p = TRUE flag in R for log values and the lower.tail flag for obtaining $\log(1 - F(t;\theta))$ values: dweibull(100,1,1, log = T) # -100 pweib...
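The same idea in Python with SciPy, mirroring the R example above: `logpdf` gives $\log f$ and `logsf` gives $\log(1-F)$ directly, so the log-hazard never forms $1-F$ explicitly (here with the Weibull of shape 1 and unit scale, i.e. the unit exponential):

```python
from scipy.stats import weibull_min

t, shape = 100.0, 1.0                  # shape 1, unit scale => Exponential(1)

log_f = weibull_min.logpdf(t, shape)   # log f(t), here -100
log_S = weibull_min.logsf(t, shape)    # log(1 - F(t)), no catastrophic 1 - F
log_hazard = log_f - log_S             # exponential: constant hazard 1, log 0
```

Evaluating `1 - weibull_min.cdf(100, 1)` directly would underflow to 0, while the log survival function is computed stably.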
45,632
Evaluating the hazard function when the CDF is close to 1?
For a survival curve based on a parametric distribution, the hazard is often an explicit function of the parameters. For example, this link provides several hazard functions for different distributions. So when we know the values of the parameters and want to calculate the hazard, as asked in this question, the best way is ...
45,633
About Sampling and Random Variables
The random variable $Y$ describes a relationship between events and the corresponding probabilities of those events. In more practical terms, a random variable describes a data-generating process. When you generate a random data point that is described by the random variable $Y$, the probability distribution of $Y$ des...
45,634
About Sampling and Random Variables
For the sake of redundancy and addition: a random variable is the mathematical model of a measurement or experiment whose value is not predictable/deterministic; its value can only be understood probabilistically, meaning that it can be tested over the longer run. A standard example is that ...
45,635
Can Cramér's V be used an effect size measure for McNemar's test?
In the context of a 2x2 table, Cramer's $V$ is equivalent to the phi coefficient. Moreover, phi is equivalent to Pearson's product moment correlation of the two columns of $1$'s and $0$'s when the 2x2 table is disaggregated. That corresponds to a different magnitude than the one that McNemar's test is testing. So, n...
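The phi/correlation equivalence is easy to check numerically (toy 2x2 counts, entirely hypothetical):

```python
import numpy as np

# Hypothetical 2x2 table of paired yes/no outcomes
#            B=0  B=1
table = np.array([[20, 15],    # A=0
                  [ 5, 60]])   # A=1

# Disaggregate into two 0/1 columns, one row per observation
a = np.repeat([0, 0, 1, 1], table.ravel())
b = np.repeat([0, 1, 0, 1], table.ravel())

# Phi via Pearson correlation of the disaggregated columns
phi_corr = np.corrcoef(a, b)[0, 1]

# Phi via the chi-squared statistic: phi = sqrt(chi2 / n)
n = table.sum()
row, col = table.sum(1), table.sum(0)
expected = np.outer(row, col) / n
chi2 = ((table - expected) ** 2 / expected).sum()
phi_chi2 = np.sqrt(chi2 / n)
```

The two routes agree up to sign, which is the 2x2 equivalence of Cramér's $V$, phi, and Pearson's $r$ described above.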
45,636
Can Cramér's V be used an effect size measure for McNemar's test?
Cramér's $V$ doesn't correspond well to what is tested by McNemar's test. Edit: Disclosure: the webpage and R package cited below are mine. Probably the most common effect size statistic for McNemar's test is the odds ratio, though Cohen's $g$ could be used. Cohen (1988) also uses a statistic he calls $P$. For definitions,...
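For concreteness, both statistics come straight from the discordant cells of the paired 2x2 table (toy counts, hypothetical):

```python
# Paired 2x2 table:           after=no   after=yes
#            before=no    [[     30,       b=25  ],
#            before=yes    [   c=10,         35  ]]
b, c = 25, 10                          # discordant pairs

odds_ratio = b / c                     # 25/10 = 2.5
cohens_g = b / (b + c) - 0.5           # 25/35 - 0.5, Cohen's g

# The quantity McNemar's test itself examines depends only on b and c
mcnemar_chi2 = (b - c) ** 2 / (b + c)
```

Note that the concordant cells (30 and 35 here) do not enter any of these quantities, which is exactly why a concordance-based measure like Cramér's $V$ is mismatched to McNemar's test.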
45,637
Equivalence test for binomial data
While one can use the t test to test for proportion difference, the z test is a tad more precise, since it uses an estimate of the standard deviation formulated specifically for binomial (i.e. dichotomous, nominal, etc.) data. The same applies to the z test for proportion equivalence. First, the z test for difference i...
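A sketch of the two-proportion z statistic for the difference test, using the pooled standard-error form (the counts are hypothetical; the equivalence version would apply the same standard error in a TOST layout):

```python
import math

x1, n1 = 45, 100                       # successes / trials, group 1 (hypothetical)
x2, n2 = 36, 100                       # group 2

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)         # pooled proportion under H0: p1 == p2

# Binomial-specific standard error of the difference
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal: 2 * (1 - Phi(|z|))
p_value = math.erfc(abs(z) / math.sqrt(2))
```

The standard deviation here is derived from the binomial variance $p(1-p)$, which is the "formulated specifically for binomial data" point made above.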
45,638
Efficient random generation from truncated Laplace distribution
A straightforward method that is reasonably efficient if the left truncation point is below the median is to just generate a Laplace random variate, then reject it if it falls to the left of the truncation point and generate another, repeating until one is generated that falls above the truncation point. If the Laplac...
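The rejection scheme can be sketched in a few lines of NumPy (function name and parameters are illustrative, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(42)

def truncated_laplace(a, loc=0.0, scale=1.0, size=1000):
    """Left-truncated Laplace via simple rejection: draw, keep x >= a."""
    out = np.empty(0)
    while out.size < size:
        draw = rng.laplace(loc, scale, size=2 * size)
        out = np.concatenate([out, draw[draw >= a]])
    return out[:size]

# Truncation point below the median => acceptance probability above 1/2,
# so the loop typically finishes in a single pass
samples = truncated_laplace(a=-0.5, size=5000)
```

The acceptance probability equals the survival function at the truncation point, so this is efficient exactly in the regime described above and degrades as the truncation point moves into the right tail.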
45,639
Efficient random generation from truncated Laplace distribution
If you need extreme efficiency and don't mind increased code complexity, you could adapt this ziggurat-like rejection sampling technique to the standard Laplace distribution directly and use shifts and scaling to produce distributions with arbitrary parameters.
45,640
Why do we use inverse Gamma as prior on variance, when empirical variance is Gamma (chi square)
No reconciliation is needed. In one case you are referring to the sampling distribution of the maximum likelihood estimator, which is a function of the data. In the other, you are referring to the posterior distribution of the actual model parameter. Two different referents; two different solutions. The advantage of...
45,641
Why do we use inverse Gamma as prior on variance, when empirical variance is Gamma (chi square)
Even though this question was posted 4 years ago, I will make a post since it seems that there is a misconception in the comments of the other post. The question "So, the fact that Gamma is the sampling distribution of the MLE estimator, has nothing to do with the fact that we use the InverseGamma as a prior on the mode...
45,642
How to identify if a problem is a good candidate for applying machine learning?
Prof. Yaser Abu-Mostafa talks briefly about this in his Caltech course on machine learning during the first lecture. He identifies 3 essential points you have to consider before applying machine learning to your problem: 1st. A pattern exists. In order to be able to use your features for predicting anything ...
45,643
How to identify if a problem is a good candidate for applying machine learning?
I would consider updating #5, as quantifiable metrics are not necessarily easy to optimize. For instance, directly optimizing 0-1 loss is NP-hard. So #5 could instead say: The metric we would like our model to optimize is quantifiable and is feasibly solvable (or has an appropriate surrogate). Other than that, your l...
45,644
How to identify if a problem is a good candidate for applying machine learning?
1-5 are ideal, but I do not believe are 100% required. Some will rightfully cringe at that sentence. The most important thing to remember is the "no free lunch theorem", which reminds us that ML is based on theories, hypotheses, and/or assumptions at every stage of the pipeline. We can only define whether ML will help ...
45,645
Optimizing the ridge regression loss function with unpenalized intercept
I'd suggest collecting the intercept and weights into a single vector $v = (b, w)$ and writing $$ L(v) = \|y - Xv\|^2 + \lambda v^T \Omega v $$ where $$ \Omega = \text{diag}\left(0, 1, \dots, 1\right) $$ so $$ v^T\Omega v = \sum_{j \geq 2} v_j^2 = w^T w. $$ Now we have $$ L(v) = y^Ty - 2v^TX^Ty + v^T(X^TX + \lambda \Omega)v $$ so $$ \nabla L = -2X^Ty + 2 (X^TX ...
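The resulting normal equations $(X^TX + \lambda\Omega)\hat v = X^Ty$ can be solved directly; a NumPy sketch on simulated data (all values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
# Design matrix with an explicit column of ones for the intercept b
X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])
y = X @ np.array([5.0, 1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

lam = 10.0
Omega = np.diag([0.0] + [1.0] * p)     # zero in position 0 => b is unpenalized

# Solve (X'X + lam * Omega) v = X'y
v_hat = np.linalg.solve(X.T @ X + lam * Omega, X.T @ y)
```

With `lam = 0` this reduces to ordinary least squares, and the zero in `Omega[0, 0]` is exactly what leaves the intercept out of the penalty.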
45,646
Sum of predicted values to the power of 10 [closed]
Let's work with natural logarithms, instead of base 10. You are tripped up by a common pitfall in the lognormal distribution: the expectation of the lognormal is not the exponential of the expectation $\mu$ on the log scale. You need to account for the heavy-tailedness by including the residual variance and calculate $...
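To make the pitfall concrete, a simulation sketch with natural logs and hypothetical parameters, comparing the naive back-transform $e^{\mu}$ with the lognormal mean $e^{\mu + \sigma^2/2}$:

```python
import numpy as np

rng = np.random.default_rng(7)
mu, sigma = 2.0, 0.8                   # hypothetical mean/sd on the log scale
log_y = rng.normal(mu, sigma, size=200_000)
y = np.exp(log_y)                      # lognormal sample

naive = np.exp(mu)                     # ignores the residual variance
corrected = np.exp(mu + sigma**2 / 2)  # lognormal mean E[Y]

empirical = y.mean()
```

The empirical mean matches the variance-corrected expression, not the naive exponentiated mean; in a regression setting $\sigma^2$ would be estimated by the residual variance on the log scale.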
45,647
Sum of predicted values to the power of 10 [closed]
Differences in averages: You have a function $f(x)$ such that $f(\overline{x}) \neq \overline{f(x)}$. So, just as the mean of the squares is not equal to the square of the mean, the mean of a power is not the power of the mean. You estimate the model $y_i = \hat{y}_i + e_i$. Then you compare $\overline{10^{y_i}}$ wi...
45,648
Fisher's Information for Laplace distribution
Your notation is ridiculously over-complicated for what you're doing. For the Laplace distribution with unit scale (which is the density you have given) you have $l_x(\theta) = - \ln 2 - |x - \theta|$, which has the (weak) derivative: $$\frac{\partial l_x}{\partial \theta}(\theta) = \text{sgn}(x- \theta) \text{ } \tex...
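Since $\text{sgn}(x-\theta)^2 = 1$ almost surely, the Fisher information is $1$; a tiny Monte Carlo check (illustrative sketch, arbitrary $\theta$):

```python
import numpy as np

rng = np.random.default_rng(9)
theta = 0.3
x = rng.laplace(loc=theta, scale=1.0, size=100_000)

score = np.sign(x - theta)       # the weak derivative of the log-density
fisher_hat = np.mean(score**2)   # Monte Carlo estimate of I(theta)
```

Every squared score term is exactly 1 (the event $x = \theta$ has probability zero), so the estimate equals the theoretical value $I(\theta) = 1$ regardless of $\theta$.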
45,649
Meaning of "identically distributed" when there's only one variable
You have $n$ observations, $y\in\mathbb{R}^n$. You correspondingly have $n$ noise terms, $\epsilon\in\mathbb{R}^n$. The last sentence means that each separate noise term $\epsilon_i$ is identically distributed and that they are independent. (In more general situations, the $\epsilon_i$ may not be identically distribute...
45,650
Meaning of "identically distributed" when there's only one variable
Identically distributed generally means that each observation of a variable was sampled independently from a distribution identical to every other observation on that variable. A simple random walk where $y_{t} = y_{t-1} + 2\,\mathrm{Bernoulli}\left(0.5\right)-1$ is an example of a variable ($y_{t}$) that is not i.i.d.: ...
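The non-identical marginals of the random walk are easy to see by simulation: the variance of $y_t$ grows linearly in $t$ (a sketch with simulated paths):

```python
import numpy as np

rng = np.random.default_rng(3)
n_paths, T = 10_000, 50

# Simple random walk: y_t = y_{t-1} + (2 * Bernoulli(0.5) - 1)
steps = 2 * rng.integers(0, 2, size=(n_paths, T)) - 1
y = steps.cumsum(axis=1)

# Marginal variance grows with t, so the y_t are not identically distributed
var_early = y[:, 4].var()        # Var(y_5)  is about 5
var_late = y[:, 49].var()        # Var(y_50) is about 50
```

Each $y_t$ is a sum of $t$ independent $\pm 1$ steps, so $\mathrm{Var}(y_t) = t$; the marginal distributions differ across $t$ even though the increments themselves are i.i.d.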
45,651
Proof that $K(x,y) = f(x)f(y)$ is a kernel
$\sum_{i=1}^n\sum_{j=1}^nK(x_i, x_j)c_ic_j=\sum_{i=1}^n\sum_{j=1}^nf(x_i)f(x_j)c_ic_j = (\sum_{i=1}^nf(x_i)c_i)^2 \geq 0$
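A quick numeric illustration of the argument: for an arbitrary real-valued $f$ (the particular $f$ below is just an example), the Gram matrix $K_{ij} = f(x_i)f(x_j)$ is rank one and positive semi-definite:

```python
import numpy as np

def f(x):
    return np.sin(x) + x**2          # any real-valued function works

x = np.linspace(-2, 2, 25)
K = np.outer(f(x), f(x))             # K(x_i, x_j) = f(x_i) f(x_j)

# Rank-one Gram matrix; all eigenvalues are >= 0 (one positive, rest zero)
eigvals = np.linalg.eigvalsh(K)
```

The single nonzero eigenvalue is $\sum_i f(x_i)^2$, matching the sum-of-squares form $(\sum_i f(x_i)c_i)^2 \geq 0$ of the quadratic form above.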
45,652
Gradient and hessian of the MAPE
The Mean Absolute Percentage Error (MAPE) is defined as $$\text{MAPE} := \frac{1}{N}\sum_{i=1}^N\frac{|\hat{y}_i-y_i|}{y_i},$$ where the $y_i$ are actuals and the $\hat{y}_i$ are predictions. The gradient is the vector collecting the first derivatives: $$\frac{\partial\text{MAPE}}{\partial\hat{y}_i} = \begin{cases} -\...
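A numeric sanity check of the piecewise (sub)gradient (sketch; assumes strictly positive actuals and $\hat{y}_i \neq y_i$ everywhere, so the kink of the absolute value is never hit):

```python
import numpy as np

def mape(y_hat, y):
    return np.mean(np.abs(y_hat - y) / y)

def mape_grad(y_hat, y):
    # (1/N) * sign(y_hat_i - y_i) / y_i; undefined where y_hat_i == y_i
    return np.sign(y_hat - y) / (len(y) * y)

rng = np.random.default_rng(5)
y = rng.uniform(1, 10, size=20)                      # positive actuals
y_hat = y + rng.uniform(0.1, 1.0, 20) * rng.choice([-1, 1], 20)

# Central finite differences agree with the analytic gradient away from kinks
g = mape_grad(y_hat, y)
eps = 1e-6
g_num = np.array([
    (mape(y_hat + eps * np.eye(20)[i], y)
     - mape(y_hat - eps * np.eye(20)[i], y)) / (2 * eps)
    for i in range(20)
])
```

Because MAPE is piecewise linear in $\hat{y}$, the second derivatives are zero almost everywhere, which is worth keeping in mind before feeding this "Hessian" to a second-order optimizer.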
45,653
Object detection - how to annotate negative samples
Short answer: You don't need negative samples to train your model. Focus on improving your model. Long answer: Ideally your detector after being trained on your 2 objects would detect them and place a bounding box around them. When you test it and get wrong results this could be caused by a variety of reasons: A wron...
45,654
Object detection - how to annotate negative samples
You can simply use the verify feature in LabelImg and it will create a file without annotations you can run through your process. Reading through this article it seems negative images are indeed needed.
45,655
Object detection - how to annotate negative samples
The OP asked about negative samples in the Tensorflow Object Detection API. I agree that we do not need specific images for negative samples in NN-based object detection. All negative samples are implicitly available when some areas of the images are not labelled (no bounding box on them).
45,656
Why does VGG16 double number of features after each maxpooling layer?
You should really ask in the course forum :) or contact Jeremy on Twitter, he's a great guy. Having said that, the idea is this: subsampling, aka pooling (max pooling, mean pooling, etc.: currently max pooling is the most common choice in CNNs) has three main advantages: it makes your net more robust to noise: if you ...
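A back-of-the-envelope sketch of the usual arithmetic behind this design (the starting sizes are VGG-like but illustrative): a 2×2 max pool quarters the spatial area, so doubling the channels keeps the per-layer cost of a 3×3 convolution roughly constant across stages.

```python
# Cost of a 3x3 conv layer, up to a constant: H * W * C_in * C_out * 3 * 3.
# VGG-style stages: after each 2x2 max pool, H and W halve and channels double.
def conv_flops(h, w, c_in, c_out, k=3):
    return h * w * c_in * c_out * k * k

h, w, c = 224, 224, 64
for stage in range(4):
    print(stage, (h, w, c), conv_flops(h, w, c, c))
    h, w, c = h // 2, w // 2, c * 2   # max pool, then double the channels
```

The printed FLOP counts are identical at every stage: area shrinks by 4 while `c_in * c_out` grows by 4.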
45,657
Why does VGG16 double number of features after each maxpooling layer?
If you think of the first convolutional layer as extracting simple features, like lines, then the next convolutional layer has to combine these lines into more complicated structures. Basically, each channel in the next layer is responsible for combining different lines into a more complicated structure. Since structure get...
45,658
$X$, $Y$ independent identically distributed. Are there counterexamples to symmetry of $X-Y$?
Corrected after @Glen_b pointed out a glaring error. Sloppy proof, but should work. I think we can prove this using characteristic functions. Let $X$, $Y$ be iid and let $Z = Y-X$. Then, $\phi_{X-Y}(t) = E[e^{it(X-Y)}] = \phi_X(t)\phi_{-Y}(t)$. Similar to the CDF, the characteristic function of X uniquely characterizes the dist...
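An exact discrete check of the symmetry claim (the pmf is an arbitrary asymmetric choice): for independent $X, Y$ with the same distribution, $P(X-Y=k)=P(X-Y=-k)$, because every pair $(x,y)$ has a mirror pair $(y,x)$ with the same probability.

```python
from itertools import product
from collections import Counter

# an arbitrary (asymmetric) pmf on {0, 1, 2}
pmf = {0: 0.5, 1: 0.3, 2: 0.2}

# exact distribution of D = X - Y for independent X, Y with this pmf
d = Counter()
for x, y in product(pmf, pmf):
    d[x - y] += pmf[x] * pmf[y]

for k in sorted(d):
    print(k, round(d[k], 6), round(d[-k], 6))   # P(D=k) == P(D=-k)
```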
45,659
$X$, $Y$ independent identically distributed. Are there counterexamples to symmetry of $X-Y$?
Just to clear up the source of my own confusion, I managed to coax just enough (about 4 lines!) out of Google books to resolve the origin of my doubt. It was from Romano and Siegel* and what they actually have there is: 4.34 Identically distributed random variables such that their difference does not have a symmetric ...
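A concrete sketch of the Romano–Siegel point (identically distributed but *dependent* variables can have an asymmetric difference); the construction below is my own minimal example, not the one from the book:

```python
# X uniform on {0, 1, 2}; Y = (X + 1) mod 3 is identically distributed
# (also uniform on {0, 1, 2}) but dependent. Their difference is asymmetric:
from collections import Counter

diff = Counter()
for x in (0, 1, 2):
    y = (x + 1) % 3
    diff[x - y] += 1 / 3

print(dict(diff))   # P(D=-1) = 2/3, P(D=2) = 1/3, while P(D=1) = 0
```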
45,660
What is the probability for an N-char string to appear in an M-length random string?
This answer can be viewed as supplemental to @StephanKolassa's answer, in light of the counterexample provided in the comment by @whuber. Although the accepted answer is not a general solution, it does work for the specific question asked by the OP. We will start with a sufficient condition for the formula to hold. L...
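A brute-force illustration of the overlap issue on a tiny alphabet (a binary alphabet stands in for the 62 characters; exact enumeration is feasible here): the naive placement count matches for a non-self-overlapping pattern but overcounts for a self-overlapping one.

```python
from itertools import product

def p_contains(pattern, m, alphabet="ab"):
    """Exact P(pattern appears in a uniform random string of length m)."""
    hits = sum(pattern in "".join(s) for s in product(alphabet, repeat=m))
    return hits / len(alphabet) ** m

def naive_count(pattern, m, k=2):
    # (m - n + 1) placements times k^(m - n) free characters, over k^m strings
    n = len(pattern)
    return (m - n + 1) * k ** (m - n) / k ** m

# "ab" cannot overlap itself: the naive count is exact here
print(p_contains("ab", 3), naive_count("ab", 3))   # 0.5  0.5
# "aa" overlaps itself: "aaa" is counted twice by the naive formula
print(p_contains("aa", 3), naive_count("aa", 3))   # 0.375  0.5
```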
45,661
What is the probability for an N-char string to appear in an M-length random string?
The answer by Stephan Kolassa works, but it is not general as noted in the answer by knrumsey. Several questions here have had similar issues with the overlap/double counting. For methods to solve this see Probability of a similar sub-sequence of length X in two sequences of length Y and Z A fair die is rolled 1,000 ti...
45,662
What is the probability for an N-char string to appear in an M-length random string?
EDIT: The answer by knrumsey is better than mine. I hope the OP will un-accept my answer and accept theirs. (I would consider deleting mine, but it may serve as useful context for knrumsey's.) Overall, there are $62^M$ different possible strings, because you have $62$ choices for each of the $M$ characters. How many o...
45,663
Role of delays in LSTM networks
UPDATED Your example is very interesting. On one hand it is constructed in such a way that you really need only one parameter and its value is 1: $$y_t=\beta+w y_{t-1}\\\beta=0\\w=1$$ Your training data set is small (96 observations), but with three layer network you have quite a few parameters. It's very easy to overf...
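To illustrate the "one parameter suffices" point: for a series as simple as a deterministic trend (a stand-in for the example, since the original series isn't reproduced here), ordinary least squares recovers the recurrence $y_t=\beta+w\,y_{t-1}$ exactly, with no network needed. (For a pure random walk the answer's values $\beta=0$, $w=1$ apply; for $y_t=t$ it is $\beta=1$, $w=1$.)

```python
import numpy as np

# toy series: y_t = t, the kind of trend a single AR coefficient captures
y = np.arange(1.0, 21.0)

# least-squares fit of y_t = beta + w * y_{t-1}
X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
beta, w = np.linalg.lstsq(X, y[1:], rcond=None)[0]
print(round(beta, 6), round(w, 6))   # 1.0  1.0 -- two parameters suffice
```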
45,664
Rare Events Logistic Regression
Don't do anything special. However, and this is crucial: choose a good quality measure. And that is not classification accuracy, sensitivity, specificity or similar measures, such as ROC curves. These can be very misleading in the case of unbalanced data, "identifying" that simply labeling everything as the majority cl...
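A minimal sketch of why accuracy misleads on imbalanced data while a proper scoring rule does not (the 5% prevalence and the two toy "models" are made up for illustration):

```python
import math

# 1000 cases, 5% positives
y = [1] * 50 + [0] * 950

# model A: honest probability estimate (the base rate)
# model B: hard majority-class output dressed up as probabilities near 0
p_a = [0.05] * 1000
p_b = [1e-6] * 1000

def accuracy(p, y, thr=0.5):
    return sum((pi > thr) == yi for pi, yi in zip(p, y)) / len(y)

def log_loss(p, y):
    return -sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
                for pi, yi in zip(p, y)) / len(y)

print(accuracy(p_a, y), accuracy(p_b, y))   # both 0.95 -- accuracy can't tell
print(log_loss(p_a, y), log_loss(p_b, y))   # log loss clearly prefers model A
```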
45,665
Rare Events Logistic Regression
What you are asking here is basically food for numerous posts, opinions and a lot of research on the field of imbalanced classes. To answer your main question, there is not a straight answer of what a “rare” event is. I personally would say that your case is basically in the boundary. I will list some approaches here b...
45,666
Ordinal regression: logit, probit, complementary log-log or negative log-log?
There is no general guidance on this question, except that if you had to pick one model without knowing anything about the fit of any of the models, you might pick the logistic link (proportional odds ordinal logistic model) because its parameters are more interpretable. In my RMS course notes I have an in-depth case ...
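For reference, the four inverse link functions side by side (standard definitions, evaluated at a couple of points; the choice of evaluation points is arbitrary):

```python
import math

# inverse link functions mapping a linear predictor eta to a probability
inv_links = {
    "logit":   lambda e: 1 / (1 + math.exp(-e)),
    "probit":  lambda e: 0.5 * (1 + math.erf(e / math.sqrt(2))),
    "cloglog": lambda e: 1 - math.exp(-math.exp(e)),
    "loglog":  lambda e: math.exp(-math.exp(-e)),
}

# same eta, different implied probabilities: logit and probit are symmetric
# around 0.5, while cloglog and loglog approach one tail faster than the other
for name, g in inv_links.items():
    print(name, round(g(0.0), 4), round(g(2.0), 4))
```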
45,667
Why does including an offset in ordinary regression change $R^2$?
Both are valid summaries of the models, but they should differ because the models involve different responses. The following analysis focuses on $R^2$, because those differ, too, and the adjusted $R^2$ is a simple function of $R^2$ (but a little more complicated to write). The first model is $$\mathbb{E}(Y \mid x) =...
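A small numerical sketch of the point that the two models share residuals but not responses (the data and the choice of offset are made up): the residual sum of squares is identical, yet $R^2$ differs because the total sum of squares is computed for different responses.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 4.2, 5.8, 8.3, 9.9])
offset = x                      # known component with coefficient fixed at 1

def r2(resp, x):
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, resp, rcond=None)
    resid = resp - X @ beta
    return 1 - resid @ resid / ((resp - resp.mean()) @ (resp - resp.mean()))

# same residuals (subtracting x only shifts the slope by 1),
# but R^2 is computed against different responses
print(r2(y, x), r2(y - offset, x))
```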
45,668
Kalman filter vs Kalman Smoother for beta calculations
They are not really different approaches in that they are solutions to different problems: one computes the sequence of filtering distributions $p(\beta_t|Y_{1:t})$, and the other the distributions based on all observations $p(\beta_t|Y_{1:T})$, for $t =1,...,T$. The smoother doesn't "hide underlying dynamics" but rath...
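A minimal sketch of the two computations on a scalar local-level model (the data, noise variances, and prior are invented for illustration): the forward pass gives $p(\beta_t|Y_{1:t})$, the RTS backward pass gives $p(\beta_t|Y_{1:T})$, and the smoothed variances never exceed the filtered ones.

```python
import numpy as np

# local level model: beta_t = beta_{t-1} + w_t,  y_t = beta_t + v_t
Q, R = 0.1, 1.0
y = np.array([0.5, 1.1, 0.7, 1.9, 2.3, 2.0, 2.8, 3.5])
T = len(y)

m_f = np.zeros(T); P_f = np.zeros(T)         # filtered  p(beta_t | y_1..t)
m, P = 0.0, 10.0                             # vague prior
for t in range(T):
    P = P + Q                                # predict
    K = P / (P + R)                          # update
    m = m + K * (y[t] - m)
    P = (1 - K) * P
    m_f[t], P_f[t] = m, P

m_s = m_f.copy(); P_s = P_f.copy()           # smoothed  p(beta_t | y_1..T)
for t in range(T - 2, -1, -1):               # RTS backward pass
    P_pred = P_f[t] + Q
    C = P_f[t] / P_pred
    m_s[t] = m_f[t] + C * (m_s[t + 1] - m_f[t])
    P_s[t] = P_f[t] + C**2 * (P_s[t + 1] - P_pred)

print(np.all(P_s <= P_f + 1e-12))            # smoothing never loses precision
```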
45,669
Zero correlation between $x$ and $y = f(x)$?
There are three mutually exclusive possibilities for $f$ (apart from the trivial one where the domain of $f$ has just one element). To be fully general and avoid trivial complications, let's not worry about correlation, but focus on covariance instead: when covariance is zero, correlation is either zero or undefined. ...
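An exact discrete example of both regimes (the support and functions are arbitrary choices): for $X$ symmetric about 0 and $f$ even, $\operatorname{Cov}(X, f(X))=0$; for an affine $f$ with nonzero slope it cannot be.

```python
# X uniform and symmetric about 0; compute Cov(X, f(X)) exactly
support = [-2, -1, 0, 1, 2]
p = 1 / len(support)

def cov_with(f):
    ex  = sum(p * x for x in support)
    efx = sum(p * f(x) for x in support)
    return sum(p * x * f(x) for x in support) - ex * efx

print(cov_with(lambda x: x * x))      # ~0: even function, zero covariance
print(cov_with(lambda x: 2 * x + 3))  # ~4.0 = 2 * Var(X): affine f correlates
```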
45,670
Zero correlation between $x$ and $y = f(x)$?
The answer is no if $f$ is affine-linear with nonzero slope (and $x$ is not constant): by the linearity and translation invariance of the covariance, $\operatorname{Cov}(x, ax+b) = a\operatorname{Var}(x) \neq 0$.
45,671
Interpreting interactions in a linear model vs quadratic model
My favorite way to understand an interaction between two continuous predictors (e.g. heat and year) is to plot it with the full range of one predictor on the x-axis and a few different lines representing potentially interesting values of the other predictor --- I usually pick three, representing low, medium, and high l...
45,672
Interpreting interactions in a linear model vs quadratic model
Taking the derivative with respect to heat, we get the marginal effect: $\frac{\partial yield}{\partial heat}=2500-2*632*heat-10*year+2*0.31*year*heat$. We can rearrange the part of interest of the ME in the following way: $year*(-10+0.62*heat)$. It means that the greater the heat, the more slowly its positive effect declines with each yea...
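Plugging in the coefficients quoted in the answer makes the sign flip concrete: the year-dependent part of the marginal effect, $year*(-10+0.62*heat)$, changes sign at $heat = 10/0.62 \approx 16.13$ (the evaluation points below are arbitrary).

```python
# marginal effect of heat, using the coefficients quoted in the answer
def me_heat(heat, year):
    return 2500 - 2 * 632 * heat - 10 * year + 2 * 0.31 * year * heat

# per-year change in the marginal effect: -10 + 0.62 * heat
for heat in (10, 16.13, 20):
    print(heat, me_heat(heat, year=1) - me_heat(heat, year=0))
```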
45,673
Deep Learning: Wild differences after model is retrained on the same data, what to do?
This is normal and exactly what you should expect. Often, deep models are exquisitely sensitive to the initial parameters. It would appear that they are exquisitely sensitive to the first step taken, even. This is because the loss function is non-convex and optimization procedures have difficulty finding any kind of ...
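A one-dimensional cartoon of the non-convexity point (the loss surface is invented for illustration): gradient descent on a tilted double well ends up in a different minimum depending entirely on the starting point, just as retrained networks land in different solutions.

```python
# f(x) = (x^2 - 1)^2 + 0.2 x  has two local minima, near -1 and +1
def grad(x):
    return 4 * x * (x * x - 1) + 0.2

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

a, b = descend(-1.5), descend(+1.5)
print(a, b)                  # two different local minima from two starts
```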
45,674
Deep Learning: Wild differences after model is retrained on the same data, what to do?
Assuming that your classifier is trained well and not over- or underfit, this can usually be taken as a measure of uncertainty: for some reason, different models using the same training data get different results. This could be possibly because you simply cannot predict this temperature well from the data, e.g. there i...
45,675
What exactly is overfitting?
You can't determine which curve is better by staring at them. And by "staring" I mean analyzing them based on pure statistical features of this particular sample. For instance, the black curve is better than the green one if the blue dots that stick out of the blue area into the red are by a pure chance, i.e. random. ...
45,676
What exactly is overfitting?
From wikipedia: In overfitting, a statistical model describes random error or noise instead of the underlying relationship. Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfit has poor predictive perfor...
45,677
What exactly is overfitting?
Overfitting is when you end up modeling noise in the data, which results in lower classification error on the training data but reduces the accuracy on unseen (validation) data. Say you have 10 pairs $(x, 2x+e)$, with $e$ a small random error. You can definitely model this perfectly with a 9-degree polyn...
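The 9-degree-polynomial example can be sketched numerically (the "noise" is a fixed alternating pattern so the run is reproducible): the degree-9 fit interpolates the training points, yet does far worse than the honest linear fit between them.

```python
import numpy as np

x = np.arange(10.0)
noise = np.array([0.5, -0.5] * 5)          # fixed stand-in for random error
y = 2 * x + noise                          # true relationship: y = 2x

fit1 = np.polyfit(x, y, 1)                 # honest model
fit9 = np.polyfit(x, y, 9)                 # interpolates every point

x_new = x[:-1] + 0.5                       # unseen inputs between the nodes
y_new = 2 * x_new                          # noiseless truth

def mse(coef, xs, ys):
    return np.mean((np.polyval(coef, xs) - ys) ** 2)

print(mse(fit1, x, y), mse(fit9, x, y))                  # degree 9 "wins" here
print(mse(fit1, x_new, y_new), mse(fit9, x_new, y_new))  # and loses badly here
```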
45,678
What exactly is overfitting?
ELI5 version: Consider a neural network having 1M parameters for classifying cat vs. dog. We can think of each of the 1M parameters as a linear fit to a yes/no question about features, like whether the image has a tail, claws, etc. What happens here is that we have more than enough questions/parameters to infer whether a given image is a cat/dog and ...
45,679
What is the expectation of the absolute value of the Skellam distribution?
It's possible to write the expectation in terms of easy-to-compute special functions. Let $z$ follow a Skellam distribution with rates $\lambda_1$ and $\lambda_2$, and $k = |z|$. The pmf for $k$ is: $$p(k; \lambda_1, \lambda_2) = \begin{cases} e^{-\lambda_1 - \lambda_2} \left( \left(\frac{\lambda_1}{\lambda_2}\right)^{...
45,680
Is "Shannon entropy" used incorrectly in machine learning related literature?
It's not a problem. In fact Shannon himself suggested that other units could be used, see in his paper "A Mathematical Theory of Communication" the very first equation (bottom of page 1). Here's a quote from the paper: In analytical work where integration and differentiation are involved the base e is sometimes usef...
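To illustrate the quote, here is a small sketch (with an arbitrary example distribution) showing that switching the log base only rescales the entropy by a constant: $H_{\text{nats}} = \ln(2)\, H_{\text{bits}}$.

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])   # an arbitrary example distribution

H_bits = -np.sum(p * np.log2(p))   # base 2: entropy in bits (here 1.75 bits)
H_nats = -np.sum(p * np.log(p))    # base e: entropy in nats

# Changing the base only rescales by a constant: H_nats = ln(2) * H_bits
```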
45,681
Intuition for Rayleigh PDF
Here's some intuition: The bivariate distribution of the errors has its maximum at 0. However, the distribution of the distance from the center does not, since the only point contributing density there is the one at the center. As you move out a little the bivariate density has decreased only a little but you get a lit...
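The intuition can be checked by simulation (a sketch with $\sigma = 1$): the bivariate density of the errors peaks at the centre, but the histogram of the distance $r$ is near zero at $r = 0$ and peaks around $\sigma$, the Rayleigh mode.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma, n = 1.0, 500_000
ex = rng.normal(0, sigma, n)
ey = rng.normal(0, sigma, n)
r = np.hypot(ex, ey)               # distance from the true point

# The bivariate density of (ex, ey) peaks at the origin, but the density of r
# is ~0 near 0 and peaks near sigma (the Rayleigh mode)
hist, edges = np.histogram(r, bins=60, range=(0, 4), density=True)
mode_bin = edges[np.argmax(hist)]  # left edge of the highest bin
```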
45,682
Intuition for Rayleigh PDF
No, the OP's claim that "the two components of $\mathbf{e}_i = (e_{x,i}, e_{y,i})^T $ are independent of each other and normally distributed with equal mean $\mu$ and variance $\sigma^2$" is incorrect: the means are $0$ because the OP subtracted off $\mu$ when defining $\mathbf{e}_i $ as $ \mathbf{x}_i - \m...
45,683
Intuition for Rayleigh PDF
Your "absolute error" random variable is Rayleigh distributed, with PDF $$\frac{x}{\sigma^2} e^{-x^2/(2\sigma^2)}, \quad x \geq 0$$ only if the measurements $\mathbf{x}_i$ are centered on the point $x^*$, i.e. the errors $e_i$ have mean zero. (If not, then you do have a "systematic error" which, if you can't factor out, would require you to use the Rice distribution to ...
45,684
How does Stigler derive this result from Bernoulli's weak law of large numbers?
It is indeed very simple algebra. Clearly you need to end up with no term in $P \big( \left| \frac{X}{N} - p \right| \leq \epsilon \big)$; since there's already a term in the complementary event, you have an obvious substitution you can perform. $$ P \bigg( \left| \frac{X}{N} - p \right| \leq \epsilon \bigg) > c\,P \bi...
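Writing $A$ for the event $\left| \frac{X}{N} - p \right| \leq \epsilon$, the obvious substitution is $P(A^c) = 1 - P(A)$; a sketch of the remaining algebra:

```latex
% Substitute P(A^c) = 1 - P(A) into P(A) > c P(A^c):
\begin{align*}
P(A) &> c\,\bigl(1 - P(A)\bigr) \\
\Longrightarrow\quad (1 + c)\,P(A) &> c \\
\Longrightarrow\quad P(A) &> \frac{c}{1 + c}
\end{align*}
```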
45,685
How to speed up hyperparameter optimization?
Here are some general techniques to speed up hyperparameter optimization. If you have a large dataset, use a simple validation set instead of cross validation. This will increase the speed by a factor of ~k, compared to k-fold cross validation. This won't work well if you don't have enough data. Parallelize the problem...
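As a sketch of the first point (all numbers here are illustrative, using plain-NumPy ridge regression): a single train/validation split costs one fit per candidate hyperparameter value, instead of $k$ fits per candidate under $k$-fold CV.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 20
X = rng.normal(size=(n, d))
beta = rng.normal(size=d)
y = X @ beta + rng.normal(scale=2.0, size=n)

# One train/validation split: a single fit per candidate value,
# instead of k fits per candidate under k-fold CV
tr, va = slice(0, 300), slice(300, None)

def ridge_fit(X, y, lam):
    """Closed-form ridge solution (X'X + lam I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

lambdas = [0.01, 0.1, 1.0, 10.0, 100.0]
val_mse = [np.mean((X[va] @ ridge_fit(X[tr], y[tr], lam) - y[va]) ** 2)
           for lam in lambdas]
best_lam = lambdas[int(np.argmin(val_mse))]
```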
45,686
How to speed up hyperparameter optimization?
The primary approach is to allow evaluations on subsets of your data, for a limited number of iterations, or for a limited amount of time (if your algorithm is an anytime algorithm). Then, you can exploit this with a hyperparameter optimization algorithm which supports multifidelity evaluations, i.e., can exploit lo...
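A minimal sketch of the multifidelity idea (successive halving, with a made-up toy loss whose low-budget evaluations are cheap but rough): evaluate many configurations on a small budget, keep the best half, double the budget, and repeat.

```python
import numpy as np

rng = np.random.default_rng(11)

# Toy objective: "validation loss after `budget` units of training" for config q.
# The 1/budget transient makes cheap evaluations rough but still informative.
def loss(q, budget):
    return q + 1.0 / budget + rng.normal(0, 0.005)

initial = list(rng.uniform(0.0, 1.0, 16))   # 16 random hyperparameter settings
configs = list(initial)
budget = 1
while len(configs) > 1:
    scores = [loss(q, budget) for q in configs]
    order = np.argsort(scores)
    configs = [configs[i] for i in order[: len(configs) // 2]]  # keep the best half
    budget *= 2                                                 # spend more on survivors

best = configs[0]
```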
45,687
How to speed up hyperparameter optimization?
Take a look at the optimiser built in to dlib. The technique it uses is "mathematically [proven to be] better than random search in a number of non-trivial situations".
45,688
When does one use $\frac{1}{\sqrt{n}}$ and when does one use $1.96\sqrt{\frac{p(1-p)}{n}}$?
The confidence interval with $\frac{1}{\sqrt{n}}$ is based on the same idea as the confidence interval with $1.96\sqrt{\frac{p(1-p)}{n}}$ but is more "conservative", in the sense that it is larger. The reason for that is that the function $$f(x) = x \left(1-x\right), x \in [0,1] $$ can be shown (with elementary calcul...
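A quick numeric check of this point (plain NumPy): $p(1-p)$ is maximised at $p = 1/2$, where the exact half-width is $1.96\sqrt{0.25/n} = 0.98/\sqrt{n} < 1/\sqrt{n}$, so the $1/\sqrt{n}$ interval is always at least as wide.

```python
import numpy as np

n = 1000
p = np.linspace(0, 1, 1001)

# Half-width of the exact 95% interval as a function of p
half_width = 1.96 * np.sqrt(p * (1 - p) / n)

# The conservative half-width does not depend on p
conservative = 1 / np.sqrt(n)
```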
45,689
When does one use $\frac{1}{\sqrt{n}}$ and when does one use $1.96\sqrt{\frac{p(1-p)}{n}}$?
For confidence intervals we always use the form $\hat{p} \pm z\sqrt{\frac{p(1-p)}{n}}$. For the 95% confidence interval, $z=1.96$. Since the term $z\sqrt{\frac{p(1-p)}{n}}$ depends on $p$, there are some proportions which have bigger confidence intervals. The worst case scenario is where $p=0.5$; this has the most variati...
45,690
concurvity in negative binomial GAM
If you have concurvity values that high, I would want to do some additional checks to see if the concurvity was leading to problems in the estimation of the smooths. Much as I would if I was fitting a GLM with highly collinear covariates (i.e. with large VIFs). The reason for the difference between full = TRUE (the def...
45,691
Does the Cox proportional hazards model process past values for time-varying covariates?
It seems that your question concerns not the specific R function coxph, but survival models in general. The vignette, when speaking about "covariate values of each subject just prior to the event time", refers to the hazard function $h(t)$. This function in fact only takes into account the current values of covariates,...
45,692
Does the Cox proportional hazards model process past values for time-varying covariates?
The answer from @juod gets to the essential point: calculations in Cox regressions are based on instantaneous values of covariates "just prior to" each event. Prior history is taken into account in the following way: each individual still at risk at an event time survived up until then, so those individuals' covariate ...
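The mechanics are easiest to see in the counting-process ((start, stop] interval) data layout used for time-varying covariates. This is an illustrative sketch of the data format, not the coxph internals; the numbers are made up:

```python
# Counting-process layout: one row per episode, covariate constant over (start, stop].
rows = [
    # (id, start, stop, event, covariate)
    (1, 0, 5, 0, 0.0),   # covariate 0.0 in force up to t = 5, no event
    (1, 5, 9, 1, 1.3),   # covariate 1.3 in force from t = 5 until the event at t = 9
]

# At an event time t the partial likelihood uses, for each subject at risk,
# the covariate value from the row whose interval contains t -- nothing earlier.
def covariate_at(rows, subject, t):
    for sid, start, stop, _, cov in rows:
        if sid == subject and start < t <= stop:
            return cov
    return None
```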
45,693
Contrasting covariance calculation using R, Matlab, Pandas, NumPy cov, NumPy linalg.svd
Note that numpy.cov() considers its input data matrix to have observations in each column, and variables in each row, so to get numpy.cov() to return what other packages do, you have to pass the transpose of the data matrix to numpy.cov(). The Python code that you linked can be used to simulate what other packages do, ...
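A minimal check of the transposition point (the data here are arbitrary): passing the transpose, setting `rowvar=False`, and the hand-rolled $(n-1)$-normalised formula all agree.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))   # 100 observations (rows) x 3 variables (columns)

C1 = np.cov(X.T)                # transpose: numpy's default is variables-in-rows
C2 = np.cov(X, rowvar=False)    # or declare variables-in-columns explicitly

# Manual (n-1)-normalised covariance, the convention R/MATLAB/pandas also use
Xc = X - X.mean(axis=0)
C3 = Xc.T @ Xc / (X.shape[0] - 1)
```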
45,694
Interpreting seasonality in ACF and PACF plots
As you've rightly pointed out, the ACF in the first image clearly shows an annual seasonal pattern, with peaks at yearly lags of about 12, 24, etc. The log-transformed series is the original series on a logarithmic scale. This represents the size of the seasonal fluctuations and random fluctuations in the log-transf...
45,695
Interpreting seasonality in ACF and PACF plots
Seasonal differencing is relevant when the time series is seasonally integrated. Consider the simplest form of seasonal integration -- a SARIMA$(0,0,0)\times(0,1,0)_h$ model with a seasonal period $h$. The original time series under this model is made up of $h$ random walks that alternate every season. I.e. each season...
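This can be simulated directly (a sketch with $h = 12$): build the series from $h$ interleaved random walks and verify that the seasonal difference $x_t - x_{t-h}$ recovers the white-noise innovations.

```python
import numpy as np

rng = np.random.default_rng(7)
h, n_years = 12, 200
eps = rng.normal(size=(n_years, h))   # white-noise innovations

# SARIMA(0,0,0)x(0,1,0)_12: h random walks, one per season, interleaved
# (row-major ravel puts the 12 seasons of each year in time order)
x = np.cumsum(eps, axis=0).ravel()

# The seasonal difference x_t - x_{t-h} recovers the innovations exactly
sd = x[h:] - x[:-h]
```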
45,696
Probability of class in binary classification
You are considering different classifiers, but in fact this is not a classification problem. You are not interested in classifying your data as zeros and ones, but in predicting probabilities that individual cases are zeros and ones. In this case the usual method of choice, that is designed especially for such problems...
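A minimal sketch of the point (plain-NumPy logistic regression fitted by gradient ascent on simulated data; all numbers are illustrative): the model outputs a probability per case via the logistic link, rather than a 0/1 label.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(0.5 + 2.0 * x)))   # true P(y = 1 | x)
y = rng.binomial(1, p_true)

# Logistic regression fitted by gradient ascent on the log-likelihood
X = np.column_stack([np.ones(n), x])
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.1 * X.T @ (y - p) / n

# The model's output is a probability per case, not a 0/1 label
p_hat = 1 / (1 + np.exp(-X @ w))
```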
45,697
What is the inverse square of a distance (Euclidean)?
Imagine that we want to classify as red or blue the unknown gray point in the data cloud. Your algorithm is set up to measure Euclidean distances to the $k =3$ closest neighbors: Two of them are blue, and the third one is red. Even assigning uniform weight (equal vote) to each one of the three points, the algorithm co...
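Concretely (the three distances and labels below are made up for illustration): with inverse-square weighting each neighbour's vote counts $1/d^2$, so the nearby red point can outvote the two farther blue ones.

```python
import numpy as np

# Distances and labels of the k = 3 nearest neighbours (made-up numbers):
dist = np.array([1.0, 1.2, 0.3])   # the red point is closest
label = np.array([0, 0, 1])        # 0 = blue, 1 = red

# Uniform vote: plain majority of the k labels -> blue (2 votes to 1)
uniform_vote = int(label.mean() > 0.5)

# Inverse-square weighting: each vote counts 1/d^2, so near points dominate
w = 1.0 / dist**2
weighted_vote = int(w[label == 1].sum() > w[label == 0].sum())   # -> red
```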
45,698
How to interpret log-log regression coefficients with a different base to the natural log
If you understand how to convert units--such as kilograms to pounds or meters to feet--then you will understand perfectly what is going on here, too, because it involves a simple change of units. (For more about this, please see Gung's answer to How will changing the units of explanatory variables affect a regression ...
45,699
How to interpret log-log regression coefficients with a different base to the natural log
The key issue here is that you have the same base of the log on both sides. So an estimated $\beta_1=1$ tells you that if you multiply body by the base of the logs, you multiply brain by the base of the logs raised to the power $\beta_1$. Since the base of the logs is the same on both sides it factors out. If you raise the base of t...
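A quick numeric sketch (simulated allometry data with true elasticity 0.75): fitting the log-log regression with natural logs and with base-10 logs returns the same slope, because the $1/\ln(10)$ factor rescales $x$ and $y$ equally and cancels.

```python
import numpy as np

rng = np.random.default_rng(5)
body = np.exp(rng.normal(2.0, 1.0, 50))
brain = body ** 0.75 * np.exp(rng.normal(0.0, 0.1, 50))   # true elasticity 0.75

# Same base on both sides: the fitted slope (elasticity) is base-invariant
b_ln = np.polyfit(np.log(body), np.log(brain), 1)[0]
b_10 = np.polyfit(np.log10(body), np.log10(brain), 1)[0]
```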
45,700
n observations from a random variable VS. 1 observation from n i.i.d random variables
When modelling a sample $(x_1,\ldots,x_n)$ as an $i.i.d$ sample from a given distribution $F$, the correct way of modelling is to see this sample as the realisation of $n$ random variables $(X_1,\ldots,X_n)$ made of $n$ independent random variables identically distributed from $F$: $$(x_1,\ldots,x_n)=(X_1,\ldots,X_n)(\...