| idx | question | answer |
|---|---|---|
47,001 | How to know if the p value will increase or decrease | I thought I would add some code so the OP can easily see this effect in action. This answer supports the one by @Romain, except I used a one-sample t-test instead of a one-sample Z-test. That won't make much of a difference.
A is a sample of 100 observations with a mean of approximately 10 and a standard deviation of approx...
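The original code is cut off above; here is a hedged Python sketch of the same effect, using a deterministic stand-in sample (building A from evenly spaced normal quantiles is my own construction, not the answer's):

```python
import numpy as np
from scipy import stats

# Stand-in for A: 100 observations with mean exactly 10 and sd roughly 2,
# built deterministically from evenly spaced normal quantiles.
q = stats.norm.ppf(np.linspace(0.005, 0.995, 100))
A = 10 + 2 * q

# p value when the hypothesized mean equals the sample mean ...
p_at_10 = stats.ttest_1samp(A, popmean=10).pvalue
# ... and when the hypothesized mean moves away from it.
p_at_9 = stats.ttest_1samp(A, popmean=9).pvalue

print(p_at_10, p_at_9)  # the p value shrinks as the null moves away from the data
```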
47,002 | Projections in new coordinates in PCA example | We are subtracting the mean, which makes the mean the new origin.
We are told the mean is $[0, -1]$.
We therefore subtract $0$ from $X_1$ and $-1$ from $X_2$, which is how we obtain $(X_1, X_2+1)$.
After that we project it onto the eigenvector, $(0.95, -0.31)$; this is done by taking the inn...
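The two steps (center at the mean, then take the inner product with the eigenvector) for a hypothetical data point; the mean $[0, -1]$ and eigenvector $(0.95, -0.31)$ come from the example, the point itself is made up:

```python
mean = (0.0, -1.0)
eigvec = (0.95, -0.31)

x1, x2 = 1.0, -2.0                       # illustrative point, not from the original example
centered = (x1 - mean[0], x2 - mean[1])  # = (x1, x2 + 1)

# Projection onto the eigenvector is the inner product with the centered point.
score = centered[0] * eigvec[0] + centered[1] * eigvec[1]
print(centered, score)
```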
47,003 | one example on naive bayes? | According to Bayes formula, $$P(B| (X,Y)=(3,1))=\frac{P((X,Y)=(3,1)|B)P(B)}{P((X,Y)=(3,1))}$$
$P(B)$ is $4/7$, based on the number of examples in each class. And we’ll apply the naive assumption, i.e. conditional independence of the features given the class, as follows:
$$P((X,Y)=(3,1)|B) =P(X=3|B)P(Y=1|B)=1/4\times1/2=1/8$$
So, t...
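The arithmetic above can be checked with exact fractions. The denominator $P((X,Y)=(3,1))$ is cut off in the excerpt, so only the class-B numerator is computed here:

```python
from fractions import Fraction

# Quantities stated in the answer.
p_B = Fraction(4, 7)          # prior from the class counts
p_x_given_B = Fraction(1, 4)  # P(X=3 | B)
p_y_given_B = Fraction(1, 2)  # P(Y=1 | B)

# Naive (conditional independence) assumption:
likelihood = p_x_given_B * p_y_given_B  # P((X,Y)=(3,1) | B) = 1/8
numerator = likelihood * p_B            # unnormalized posterior for class B
print(likelihood, numerator)            # 1/8, 1/14
```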
47,004 | Is there a formal proof that Autoencoders perform non-linear PCA? | There is not a formal proof because the assertion is false: autoencoders do not perform non-linear PCA.
PCA is defined as a (reversible) linear transformation into a space where variables are now orthogonal that captures maximal variance.
Autoencoders do not do that in general.
Linear autoencoders with $k$-dimensional ...
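The truncated point about linear autoencoders can be illustrated: a $k$-dimensional linear bottleneck that spans the top-$k$ principal subspace achieves the same reconstruction error under any invertible remixing of that basis, so the individual PCA directions are not identified. A minimal numpy sketch (the random data and the mixing matrix R are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
X -= X.mean(axis=0)  # center the data

# Top-2 principal directions from the SVD.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
W = Vt[:2].T  # 5 x 2 matrix with orthonormal columns

def recon_error(B):
    # Best linear reconstruction through encoder B: project rows of X
    # onto the column span of B.
    P = B @ np.linalg.pinv(B)
    return np.linalg.norm(X - X @ P)

R = np.array([[2.0, 1.0], [0.0, 1.0]])  # arbitrary invertible mixing
e_pca = recon_error(W)
e_mixed = recon_error(W @ R)
print(e_pca, e_mixed)  # equal: same subspace, same reconstruction error
```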
47,005 | Linear regression coefficient when residuals are regressed against each other | As a hint, consider this simulation in R:
rm(list=ls())
set.seed(42)
n=1000
x3= rnorm(n)
x2 = 1 + 2 * x3 + rnorm(n)
lm(x2 ~ x3)
Coefficients:
(Intercept) x3
0.9949 2.0098
# let's store the residual v2:
v2 = lm(x2~x3)$res
# now let's consider the second model:
lm(x3~x2)
Coefficients:
(Intercept) x2 ...
47,006 | Linear regression coefficient when residuals are regressed against each other | From the first regression, we have $$X_3 = -\frac{a}{b} + \frac{1}{b} X_2 - \frac{v_2}{b}$$
Comparing this with $$ X_3 = c + dX_2 + v_3$$
and assuming the two regressions are performed similarly, we have $$c = -\frac{a}{b}$$ $$d = \frac{1}{b} $$ and $$v_3 = -\frac{1}{b}v_2$$
Therefore we have $$v_3 = -d v_2$$
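The conclusion can be checked numerically: regressing the residuals $v_3$ on $v_2$ recovers the slope $-d$ exactly in-sample. A Python translation of the R simulation from the other answer, with `numpy.polyfit` in place of `lm`:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000
x3 = rng.normal(size=n)
x2 = 1 + 2 * x3 + rng.normal(size=n)

# First regression: x2 ~ x3, keep the residuals v2.
b, a = np.polyfit(x3, x2, 1)  # slope, intercept
v2 = x2 - (a + b * x3)

# Second regression: x3 ~ x2, giving slope d and residuals v3.
d, c = np.polyfit(x2, x3, 1)
v3 = x3 - (c + d * x2)

# Regressing the residuals on each other gives slope -d (an exact
# in-sample identity, not just an approximation).
slope_res, _ = np.polyfit(v2, v3, 1)
print(slope_res, -d)
```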
47,007 | Is Bayesian estimation useful for causal analyses? | While you say we want unbiased estimators of the causal effect, generally we are interested in obtaining an accurate/precise estimate of a quantity of interest. When offered a range of estimators to choose from, a sensible selection criterion is to choose one that minimizes the expected loss, where loss is due to the e...
47,008 | KL-divergence: P||Q vs. Q||P | In
$$\DeclareMathOperator{\E}{\mathbb{E}}
D_{KL}(P || Q) = \int_{-\infty}^{\infty}p(x)\log\left(\frac{p(x)}{q(x)}\right)\;dx
= \E_{P}\log\left(\frac{p(X)}{q(X)}\right)
$$ we see this is the expectation of the loglikelihood ratio when $P$ is the truth, see Intuition on the Kullback-Leibler (KL) Divergence.
If, in hypot...
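The asymmetry between $D_{KL}(P\|Q)$ and $D_{KL}(Q\|P)$ is easy to see with the standard closed form for the KL divergence between two normals (the particular $\sigma$ values below are arbitrary):

```python
from math import log

def kl_normal(mu1, s1, mu2, s2):
    """D_KL( N(mu1, s1^2) || N(mu2, s2^2) ), standard closed form."""
    return log(s2 / s1) + (s1**2 + (mu1 - mu2)**2) / (2 * s2**2) - 0.5

kl_pq = kl_normal(0, 1, 0, 2)  # P narrow, Q wide
kl_qp = kl_normal(0, 2, 0, 1)  # roles reversed
print(kl_pq, kl_qp)            # both positive, but clearly unequal
```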
47,009 | Uncertainty propagation for the solution of an integral equation | Let's break this down into easier problems. To keep the post reasonably short, I will only sketch a good confidence interval procedure without going into all the details.
What is interesting about this situation is that because $Y$ varies in such a complex, nonlinear fashion with the distribution parameters, a careful...
47,010 | Understanding Simpson's paradox with random effects | but again this is a singular fit because the variance of the random slopes is zero - which also does not make sense because it is clearly quite variable (from the plot).
The first thing I notice here, just eyeballing the plot, is that I have to disagree that the variation in the slopes is clear. The slopes all appear fairl...
47,011 | Exchangeability and joint distribution | Instead of focusing only on the distribution function, let's focus on equality in distribution.
A finite sequence of random variables $X_1, \ldots, X_n$ is exchangeable if for every permutation $\pi$ we have
$$
X_1, \ldots, X_n =_d X_{\pi(1)}, \ldots, X_{\pi(n)}
$$
where $=_d$ means equality in distribution.
Equality...
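A concrete instance of the definition with $n = 2$: draws without replacement from an urn are exchangeable (sequence probabilities depend only on the multiset of outcomes) without being independent. The urn composition below is an arbitrary choice:

```python
from fractions import Fraction

def p_seq(c1, c2, red=2, blue=1):
    # Probability of drawing colours (c1, c2) in that order,
    # without replacement, from an urn with `red` red and `blue` blue balls.
    total = red + blue
    p1 = Fraction(red if c1 == "R" else blue, total)
    left = {"R": red - (c1 == "R"), "B": blue - (c1 == "B")}
    p2 = Fraction(left[c2], total - 1)
    return p1 * p2

# Swapping the order leaves the probability unchanged: exchangeability.
print(p_seq("R", "B"), p_seq("B", "R"))  # both 1/3
```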
47,012 | Exchangeability and joint distribution | Something that might be helpful in unifying these two views is the Hewitt-Savage(-de Finetti) representation theorem.
The theorem says that $X_1,\dots,X_n$ (viewed as the start of an infinite exchangeable sequence) are exchangeable precisely when they are independent and identically distributed conditional on some additional information. This is important in Bayesian statisti...
47,013 | Mixed models in ecology: when to use a random effect [duplicate] | For almost all variables you have the choice to model them with a fixed or random effect. I personally find the term random effect quite confusing, since random effects are usually just grouping factors for which we are trying to control. They are always categorical, as you can’t force R to treat a continuous variable ...
47,014 | Mixed models in ecology: when to use a random effect [duplicate] | Yes, when you include the location and date as independent variables (as in your formula), you are separating their effects from X.
However, you do want to be sure that you are not missing variables in your formula that impact the dependent variable. If you are missing variables, the effect of X that you get may not be...
47,015 | Mixed models in ecology: when to use a random effect [duplicate] | Yes, the proposed mixed model will separate the sources of variability, leaving the fixed effects (X) separated from the random effects of the combination of location/pair nested variables and date.
Essentially, what the introduction of random effects does is to identify sources of variability, and by estimating them you...
47,016 | Should I put outcome variable in Matchit::matchit () | DO NOT include the outcome in the propensity score calculation. To analyze your data after matching, don't use match.data(). Just use your original data set, which hopefully contains the treatment and the outcome, and include the weights in the matchit output object in the outcome model. You can do this as follows:
m.o...
47,017 | Why don't we use OLS estimator to test hypothesis in linear regression? | The test you are proposing is exactly what is done in the T-test for an individual coefficient, which is presented in the coefficient estimates table. One of the major theorems of regression analysis is that the F-test is equivalent to the T-test when applied to a single coefficient. Thus, for an indivi...
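The equivalence can be seen at the distribution level: if $T \sim t(df)$ then $T^2 \sim F(1, df)$, so the two-sided t-test and the F-test on a single coefficient give the same p-value. (The statistic and degrees of freedom below are made-up values.)

```python
from scipy import stats

t_stat, df = 2.0, 10  # hypothetical t statistic and residual degrees of freedom

p_t = 2 * stats.t.sf(t_stat, df)    # two-sided t-test p value
p_f = stats.f.sf(t_stat**2, 1, df)  # F-test p value for the squared statistic
print(p_t, p_f)                     # identical
```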
47,018 | Does Shannon Entropy uniquely characterise distribution function $f$? | The answer is in the negative. For any real number $a$ define the function
$$f_a(x) = f(x-a).$$
It is clear that when $f$ is a distribution function, so is $f_a;$ that when $f$ is supported on the real line, so is $f_a;$ and that both $f$ and $f_a$ have equal entropy. For $a\ne 0$ it is impossible that $f=f_a,$ thoug...
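For a concrete family: the differential entropy of a Gaussian is $\tfrac{1}{2}\log(2\pi e \sigma^2)$, which contains no location parameter at all, so every shift of the density has the same entropy:

```python
from math import log, pi, e

def normal_entropy(sigma):
    """Differential entropy of N(mu, sigma^2); note that mu does not appear."""
    return 0.5 * log(2 * pi * e * sigma**2)

# f and any shifted copy f_a share this value, whatever the shift a.
h = normal_entropy(1.0)
print(h)  # ~1.4189, regardless of the mean
```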
47,019 | Confused about meaning of subject-specific coefficients in a binomial generalised mixed-effects model | The point that is made in this paper is with regard to the conditional versus marginal interpretation of the regression coefficients. Namely, because of the nonlinear link function used in the mixed effects logistic regression, the fixed effects coefficients have an interpretation conditional on the random effects. Mos...
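The conditional-versus-marginal distinction can be sketched numerically: averaging the subject-specific probabilities over a normal random intercept attenuates the effect on the log-odds scale. All numbers below (conditional effect $\beta = 1$, random-intercept sd 2) are made up for illustration:

```python
from math import exp, log, sqrt, pi
from scipy.integrate import quad

def expit(z):
    return 1 / (1 + exp(-z))

def marginal_prob(lin_pred, sd=2.0):
    # Average the subject-specific probability over N(0, sd^2) intercepts.
    phi = lambda u: exp(-u**2 / (2 * sd**2)) / (sd * sqrt(2 * pi))
    return quad(lambda u: expit(lin_pred + u) * phi(u), -40, 40)[0]

beta = 1.0  # conditional (subject-specific) log-odds effect
p0, p1 = marginal_prob(0.0), marginal_prob(beta)
beta_marginal = log(p1 / (1 - p1)) - log(p0 / (1 - p0))
print(beta, beta_marginal)  # the marginal effect is attenuated toward zero
```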
47,020 | Confused about meaning of subject-specific coefficients in a binomial generalised mixed-effects model | I agree that this can be a little confusing. Some authors avoid setting it up in this way. The important point is that the $\alpha_{i}$ are not estimated individually; instead they are subsumed into a general model and the usual assumption is that they are normally distributed, with an unknown variance, which is to be ...
47,021 | Random Forest pruning vs stopping criteria | As you say, Breiman himself suggests pruning over stopping, and the reason for this is that stopping might be short-sighted, as blocking a "bad" split now might prevent some very "good" splits from happening later. Pruning, on the other hand, starts from the fully grown tree (so it takes longer to run) but it does not ...
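The short-sightedness of stopping can be made concrete with the classic XOR configuration (a toy example, not from the original answer): the first split yields zero Gini gain, so an impurity-based stopping rule would halt, yet growing two levels classifies perfectly.

```python
def gini(labels):
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

# XOR data: no single split helps, but two splits classify perfectly.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
labels = [y for _, y in data]

# Split on the first feature: both children remain 50/50.
left = [y for (x1, _), y in data if x1 == 0]
right = [y for (x1, _), y in data if x1 == 1]
gain = gini(labels) - 0.5 * gini(left) - 0.5 * gini(right)
print(gain)  # 0.0 -> a stopping rule would quit here

# But splitting each child again on the second feature gives pure leaves.
leaves = [[y for (x1, x2), y in data if x1 == a and x2 == b]
          for a in (0, 1) for b in (0, 1)]
print([gini(leaf) for leaf in leaves])  # all 0.0
```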
47,022 | Is the issue of multiple testing related to doing several tests on the same sample? | The concern over multiple testing is, at its root, a reflection of what a "significant" result actually means. A significant result means that the observed data were unlikely to have occurred due to chance if the null hypothesis is true.
If your alpha is 0.05, then roughly 1 in 20 times that you run a statistical test ...
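The 1-in-20 arithmetic compounds across independent tests: the family-wise error rate under the null is $1 - (1 - \alpha)^m$ for $m$ tests.

```python
alpha = 0.05
for m in (1, 2, 5, 20):
    # Probability of at least one false positive among m independent tests,
    # assuming every null hypothesis is true.
    fwer = 1 - (1 - alpha) ** m
    print(m, round(fwer, 3))  # climbs from 0.05 toward ~0.64 at m = 20
```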
47,023 | Is the issue of multiple testing related to doing several tests on the same sample? | Technically, the probability of making at least one false positive will increase assuming the null is true. However, there is typically no need to correct for this as the two measures can be considered independent (arguments about safety and efficacy being related aside. If efficacy impacts safety, or the other way ar...
47,024 | Sample Variance and Dividing by $n-1$ | A somewhat intuitive argument (though one that can be made rigorous):
The population variance is itself a population average. Specifically if you define a new variable to be the square of the difference of the original variable from its population mean, $Y=(X-\mu_X)^2$ (NB when using capital letters I am referring to r...
47,025 | Sample Variance and Dividing by $n-1$ | Let $X_1, X_2, \cdots, X_n$ be iid with mean $\mu$ and variance $\sigma^2$. Let's look at the class of estimators
$$S^2_j = \frac{1}{n-j}\sum_{i=1}^n(X_i- \bar X)^2$$
Using this notation, $S_1^2$ is the usual sample variance and $S_0^2$ is the variant where we divide by the sample size.
The sample variance is unbiased f...
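The unbiasedness claim can be verified exhaustively for a toy population by enumerating every equally-likely with-replacement sample of size 2 (the population $\{0, 1, 2\}$ is an arbitrary choice):

```python
from itertools import product
from fractions import Fraction

population = [0, 1, 2]
mu = Fraction(sum(population), len(population))  # population mean = 1
sigma2 = sum((Fraction(x) - mu) ** 2 for x in population) / len(population)  # = 2/3

def s2(sample, j):
    # S_j^2: sum of squared deviations from the sample mean, divided by n - j.
    n = len(sample)
    xbar = Fraction(sum(sample), n)
    return sum((Fraction(x) - xbar) ** 2 for x in sample) / (n - j)

n = 2
samples = list(product(population, repeat=n))            # all equally likely samples
mean_s1 = sum(s2(s, 1) for s in samples) / len(samples)  # divide by n - 1
mean_s0 = sum(s2(s, 0) for s in samples) / len(samples)  # divide by n

print(sigma2, mean_s1, mean_s0)  # 2/3, 2/3 (unbiased), 1/3 (biased low)
```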
47,026 | When is performing back-transformation of inferences on transformed variables Ok, and when is it not Ok? | You actually lay out most of the important points in your question.
I assume we're restricting attention to strictly monotonic transformations.
Monotonic transformations preserve order so quantiles are "preserved" (more precisely, quantiles are equivariant to monotonic transformation) but they don't preserve relative l... | When is performing back-transformation of inferences on transformed variables Ok, and when is it not | You actually lay out most of the important points in your question.
I assume we're restricting attention to strictly monotonic transformations.
Monotonic transformations preserve order so quantiles ar | When is performing back-transformation of inferences on transformed variables Ok, and when is it not Ok?
You actually lay out most of the important points in your question.
I assume we're restricting attention to strictly monotonic transformations.
Monotonic transformations preserve order so quantiles are "preserved" (... | When is performing back-transformation of inferences on transformed variables Ok, and when is it not
You actually lay out most of the important points in your question.
I assume we're restricting attention to strictly monotonic transformations.
Monotonic transformations preserve order so quantiles ar |
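A small numerical illustration of the quantile point (the log transform and lognormal data are my own example): the median back-transforms exactly, while the mean does not.

```python
import numpy as np

rng = np.random.default_rng(0)
# Skewed positive data; odd sample size so the median is an actual data point.
x = rng.lognormal(mean=1.0, sigma=0.8, size=100_001)

# Quantiles are equivariant: median(log x) = log(median x).
assert np.isclose(np.median(np.log(x)), np.log(np.median(x)))

# Means are not: exp(mean(log x)) is the geometric mean, not mean(x).
assert not np.isclose(np.exp(np.mean(np.log(x))), np.mean(x))
```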
47,027 | Optimality of AIC w.r.t. loss functions used for evaluation | I think the answer to 1) should be "no", as there is no reason in general to expect that the model which minimizes the expected likelihood will also minimize
the MSE, MAE, etc. One might even think of the case in which the likelihood is well defined and the MSE will diverge as the sample size increases (e.g. a distribu... | Optimality of AIC w.r.t. loss functions used for evaluation | I think the answer to 1) should be "no", as there is no reason in general to expect that the model which minimizes the expected likelihood will also minimize
the MSE, MAE, etc. One might even think of | Optimality of AIC w.r.t. loss functions used for evaluation
I think the answer to 1) should be "no", as there is no reason in general to expect that the model which minimizes the expected likelihood will also minimize
the MSE, MAE, etc. One might even think of the case in which the likelihood is well defined and the MS... | Optimality of AIC w.r.t. loss functions used for evaluation
I think the answer to 1) should be "no", as there is no reason in general to expect that the model which minimizes the expected likelihood will also minimize
the MSE, MAE, etc. One might even think of |
47,028 | Optimality of AIC w.r.t. loss functions used for evaluation | I have to disagree with F. Tusell's answer, which I believe reflects a confusion about what the AIC and other loss functions evaluate.
The AIC evaluates a "modeling" density. (I use quotes around "modeling", to distinguish it from a predictive density, where we would use proper scoring rules for evaluation.) Loss funct... | Optimality of AIC w.r.t. loss functions used for evaluation | I have to disagree with F. Tusell's answer, which I believe reflects a confusion about what the AIC and other loss functions evaluate.
The AIC evaluates a "modeling" density. (I use quotes around "mod | Optimality of AIC w.r.t. loss functions used for evaluation
I have to disagree with F. Tusell's answer, which I believe reflects a confusion about what the AIC and other loss functions evaluate.
The AIC evaluates a "modeling" density. (I use quotes around "modeling", to distinguish it from a predictive density, where w... | Optimality of AIC w.r.t. loss functions used for evaluation
I have to disagree with F. Tusell's answer, which I believe reflects a confusion about what the AIC and other loss functions evaluate.
The AIC evaluates a "modeling" density. (I use quotes around "mod |
47,029 | Favored methods for overcoming selection bias (special attention to healthcare fields)? | There is no single magic bullet to estimate treatment effects in the context of confounding (note: "selection bias" can mean something else). There is also no agreement in the field about the best method, and the best method for a given problem may differ from the best method for another (and neither will be immediatel... | Favored methods for overcoming selection bias (special attention to healthcare fields)? | There is no single magic bullet to estimate treatment effects in the context of confounding (note: "selection bias" can mean something else). There is also no agreement in the field about the best met | Favored methods for overcoming selection bias (special attention to healthcare fields)?
There is no single magic bullet to estimate treatment effects in the context of confounding (note: "selection bias" can mean something else). There is also no agreement in the field about the best method, and the best method for a g... | Favored methods for overcoming selection bias (special attention to healthcare fields)?
There is no single magic bullet to estimate treatment effects in the context of confounding (note: "selection bias" can mean something else). There is also no agreement in the field about the best met |
47,030 | Favored methods for overcoming selection bias (special attention to healthcare fields)? | I don't disagree with Noah's answer. I have never heard of Bayesian Additive Regression Trees or of targeted minimum-loss estimation, so I can't comment on those specifically. Methods involving weighting and propensity scores are well-accepted in epidemiological circles.
You should also consider instrumental variable... | Favored methods for overcoming selection bias (special attention to healthcare fields)? | I don't disagree with Noah's answer. I have never heard of Bayesian Additive Regression Trees or of targeted minimum-loss estimation, so I can't comment on those specifically. Methods involving weig | Favored methods for overcoming selection bias (special attention to healthcare fields)?
I don't disagree with Noah's answer. I have never heard of Bayesian Additive Regression Trees or of targeted minimum-loss estimation, so I can't comment on those specifically. Methods involving weighting and propensity scores are ... | Favored methods for overcoming selection bias (special attention to healthcare fields)?
I don't disagree with Noah's answer. I have never heard of Bayesian Additive Regression Trees or of targeted minimum-loss estimation, so I can't comment on those specifically. Methods involving weig |
47,031 | Why is Fisher transformation necessary? | The boundedness is not the real problem; it just explains the skewness of the sampling distribution. Basically, the transformation approach is so established for historic reasons (just like the prevailing recommendation to log- or whatever-transform your skewed variables to make them more normal for a linear regression ... | Why is Fisher transformation necessary? | The boundedness is not the real problem; it just explains the skewness of the sampling distribution. Basically, the transformation approach is so established for historic reasons (just like the prevail | Why is Fisher transformation necessary?
The boundedness is not the real problem; it just explains the skewness of the sampling distribution. Basically, the transformation approach is so established for historic reasons (just like the prevailing recommendation to log- or whatever-transform your skewed variables to make t... | Why is Fisher transformation necessary?
The boundedness is not the real problem; it just explains the skewness of the sampling distribution. Basically, the transformation approach is so established for historic reasons (just like the prevail
47,032 | Expanding initial sample when the result isn't significant | 1.) Signal-to-Noise ratio has a well defined meaning in many engineering problems. In the posting, I was referring to the less formal way it is often used in statistical inference settings.
That is, there is a parameter we want to estimate, which we'll define as $\mu$. Through data collection and analysis, we get an estimato...
That is, | Expanding initial sample when the result isn't significant
1.) Signal-to-Noise ratio has a well defined meaning in many engineering problems. In the posting, I was referring to the less formal way it is often used in statistical inference settings.
That is, there is a parameter we want to estimate we'll define as $\mu... | Expanding initial sample when the result isn't significant
1.) Signal-to-Noise ratio has a well defined meaning in many engineering problems. In the posting, I was referring to the less formal way it is often used in statistical inference settings.
That is, |
47,033 | Distribution of $XY$ when $(X,Y) \sim BVN(0,0,1,1,\rho)$ | Your original question does not require you to find the distribution of $XY$ when $(X,Y)$ is jointly normal. Here is a hint for that question:
Let $Z_i=X_iY_i$, so that $Z_1,Z_2,\ldots,Z_n$ are i.i.d variables. Hence by classical CLT we have
$$\frac{\sqrt n(W_n-\operatorname E(Z_1))}{\sqrt{\operatorname{Var}(Z_1)}}\sta... | Distribution of $XY$ when $(X,Y) \sim BVN(0,0,1,1,\rho)$ | Your original question does not require you to find the distribution of $XY$ when $(X,Y)$ is jointly normal. Here is a hint for that question:
Let $Z_i=X_iY_i$, so that $Z_1,Z_2,\ldots,Z_n$ are i.i.d | Distribution of $XY$ when $(X,Y) \sim BVN(0,0,1,1,\rho)$
Your original question does not require you to find the distribution of $XY$ when $(X,Y)$ is jointly normal. Here is a hint for that question:
Let $Z_i=X_iY_i$, so that $Z_1,Z_2,\ldots,Z_n$ are i.i.d variables. Hence by classical CLT we have
$$\frac{\sqrt n(W_n-\... | Distribution of $XY$ when $(X,Y) \sim BVN(0,0,1,1,\rho)$
Your original question does not require you to find the distribution of $XY$ when $(X,Y)$ is jointly normal. Here is a hint for that question:
Let $Z_i=X_iY_i$, so that $Z_1,Z_2,\ldots,Z_n$ are i.i.d |
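Filling in the hint numerically (the moments $\operatorname E(Z_1)=\rho$ and $\operatorname{Var}(Z_1)=1+\rho^2$ are standard results for the standard bivariate normal; the simulation settings are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.5
n = 500          # sample size per replication
reps = 5_000     # number of replications of W_n

cov = [[1.0, rho], [rho, 1.0]]
draws = rng.multivariate_normal([0.0, 0.0], cov, size=(reps, n))
z = draws[..., 0] * draws[..., 1]   # Z_i = X_i * Y_i
w = z.mean(axis=1)                  # W_n in each replication

# Standardize using E(Z_1) = rho and Var(Z_1) = 1 + rho**2.
t = np.sqrt(n) * (w - rho) / np.sqrt(1 + rho**2)

# By the classical CLT, t should be approximately N(0, 1).
assert abs(t.mean()) < 0.06
assert abs(t.std() - 1.0) < 0.06
```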
47,034 | Distribution of $XY$ when $(X,Y) \sim BVN(0,0,1,1,\rho)$ | An exact answer is given in Probability Distributions Involving Gaussian Random Variables: A Handbook for Engineers, Scientists and Mathematicians (along with many other results.)
If $(X,Y)$ is bivariate normal distributed with zero means and correlation $\rho$, then the density function of the product $XY$ is given by... | Distribution of $XY$ when $(X,Y) \sim BVN(0,0,1,1,\rho)$ | An exact answer is given in Probability Distributions Involving Gaussian Random Variables: A Handbook for Engineers, Scientists and Mathematicians (along with many other results.)
If $(X,Y)$ is bivari | Distribution of $XY$ when $(X,Y) \sim BVN(0,0,1,1,\rho)$
An exact answer is given in Probability Distributions Involving Gaussian Random Variables: A Handbook for Engineers, Scientists and Mathematicians (along with many other results.)
If $(X,Y)$ is bivariate normal distributed with zero means and correlation $\rho$, ... | Distribution of $XY$ when $(X,Y) \sim BVN(0,0,1,1,\rho)$
An exact answer is given in Probability Distributions Involving Gaussian Random Variables: A Handbook for Engineers, Scientists and Mathematicians (along with many other results.)
If $(X,Y)$ is bivari |
47,035 | Why we try to capture variability? | Statistics is the interface between math (models of the world) and our perception of reality. I suspect what you are looking for is not proof, but an understanding of the assumptions.
Math proofs are a formal logic system that works because it is self contained (in my background as a chemist this would be termed a adi... | Why we try to capture variability? | Statistics is the interface between math (models of the world) and our perception of reality. I suspect what you are looking for is not proof, but an understanding of the assumptions.
Math proofs are | Why we try to capture variability?
Statistics is the interface between math (models of the world) and our perception of reality. I suspect what you are looking for is not proof, but an understanding of the assumptions.
Math proofs are a formal logic system that works because it is self contained (in my background as a... | Why we try to capture variability?
Statistics is the interface between math (models of the world) and our perception of reality. I suspect what you are looking for is not proof, but an understanding of the assumptions.
Math proofs are |
47,036 | Why we try to capture variability? | In many cases the reason we use regression is to explain variability. In that sense, how much variability is explained is one of the key measures of success.
This may be more clear with an example. I recently worked on a project where we created a regression model to explain employee performance. We did this because ou... | Why we try to capture variability? | In many cases the reason we use regression is to explain variability. In that sense, how much variability is explained is one of the key measures of success.
This may be more clear with an example. I | Why we try to capture variability?
In many cases the reason we use regression is to explain variability. In that sense, how much variability is explained is one of the key measures of success.
This may be more clear with an example. I recently worked on a project where we created a regression model to explain employee ... | Why we try to capture variability?
In many cases the reason we use regression is to explain variability. In that sense, how much variability is explained is one of the key measures of success.
This may be more clear with an example. I |
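As a concrete, made-up numerical version of "variability explained": $R^2$ is exactly the share of the outcome's variance the regression accounts for.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=300)
y = 2.0 * x + rng.normal(size=300)   # only part of y's variability comes from x

slope, intercept = np.polyfit(x, y, deg=1)
resid = y - (intercept + slope * x)

# R^2 = 1 - (unexplained variability) / (total variability)
r2 = 1 - resid.var() / y.var()

# Here the true value is Var(2x) / Var(y) = 4 / 5 = 0.8.
assert 0.7 < r2 < 0.9
```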
47,037 | Why we try to capture variability? | Here's my few cents..
Co-movement of independent and dependent variable is the key here. Let's say we want to find out how height changes with age and we have data for 100 people. Let's say we know that our independent variable (height) varies a lot across the 100 observations, but we want to find out how much of it c... | Why we try to capture variability? | Here's my few cents..
Co-movement of independent and dependent variable is the key here. Let's say we want to find out how height changes with age and we have data for 100 people. Let's say we know t | Why we try to capture variability?
Here's my few cents..
Co-movement of independent and dependent variable is the key here. Let's say we want to find out how height changes with age and we have data for 100 people. Let's say we know that our independent variable (height) varies a lot across the 100 observations, but w... | Why we try to capture variability?
Here's my few cents..
Co-movement of independent and dependent variable is the key here. Let's say we want to find out how height changes with age and we have data for 100 people. Let's say we know t |
47,038 | Order Statistics of Poisson Distribution | Given: $(X_1, ...,X_n)$ denotes a random sample of size $n$ drawn on $X$, where $X \sim \text{Poisson}(\lambda)$ with pmf $f(x)$:
Then, the pmf of the $2^{\text{nd}}$ order statistic, in a sample of size $n$, is $g(x)$:
... where:
I am using the OrderStat function from the mathStatica package for Mathematica to auto... | Order Statistics of Poisson Distribution | Given: $(X_1, ...,X_n)$ denotes a random sample of size $n$ drawn on $X$, where $X \sim \text{Poisson}(\lambda)$ with pmf $f(x)$:
Then, the pmf of the $2^{\text{nd}}$ order statistic, in a sample of | Order Statistics of Poisson Distribution
Given: $(X_1, ...,X_n)$ denotes a random sample of size $n$ drawn on $X$, where $X \sim \text{Poisson}(\lambda)$ with pmf $f(x)$:
Then, the pmf of the $2^{\text{nd}}$ order statistic, in a sample of size $n$, is $g(x)$:
... where:
I am using the OrderStat function from the ma... | Order Statistics of Poisson Distribution
Given: $(X_1, ...,X_n)$ denotes a random sample of size $n$ drawn on $X$, where $X \sim \text{Poisson}(\lambda)$ with pmf $f(x)$:
Then, the pmf of the $2^{\text{nd}}$ order statistic, in a sample of |
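Since the mathStatica output is elided above, here is a plain-Python sketch (my own, not the package's output) that computes the same pmf by differencing the cdf of the second order statistic:

```python
from math import exp, factorial

def pois_cdf(x, lam):
    # P(X <= x) for X ~ Poisson(lam), integer x >= 0
    return sum(exp(-lam) * lam**k / factorial(k) for k in range(x + 1))

def cdf_2nd_order(x, n, lam):
    # P(X_(2) <= x) = P(at least 2 of the n draws are <= x)
    F = pois_cdf(x, lam)
    return 1 - (1 - F)**n - n * F * (1 - F)**(n - 1)

def pmf_2nd_order(x, n, lam):
    # pmf of the 2nd order statistic by differencing its cdf
    lower = cdf_2nd_order(x - 1, n, lam) if x > 0 else 0.0
    return cdf_2nd_order(x, n, lam) - lower

# Sanity check: the pmf sums to ~1 over a wide support.
total = sum(pmf_2nd_order(x, n=5, lam=2.0) for x in range(60))
assert abs(total - 1.0) < 1e-9
```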
47,039 | Order Statistics of Poisson Distribution | $\mathbb{P}(X_{(2)} = 0)$ asked for the probability where the second least r.v. is zero. In other words, it asked for the probability where at least two of $X_1, \cdots X_n$ are zero.
The statement "there are at least two zeros among $X_1, \cdots, X_n$" is false when we have an event where "at least $n-1$ of $X_1, \cdo... | Order Statistics of Poisson Distribution | $\mathbb{P}(X_{(2)} = 0)$ asked for the probability where the second least r.v. is zero. In other words, it asked for the probability where at least two of $X_1, \cdots X_n$ are zero.
The statement "t | Order Statistics of Poisson Distribution
$\mathbb{P}(X_{(2)} = 0)$ asked for the probability where the second least r.v. is zero. In other words, it asked for the probability where at least two of $X_1, \cdots X_n$ are zero.
The statement "there are at least two zeros among $X_1, \cdots, X_n$" is false when we have an ... | Order Statistics of Poisson Distribution
$\mathbb{P}(X_{(2)} = 0)$ asked for the probability where the second least r.v. is zero. In other words, it asked for the probability where at least two of $X_1, \cdots X_n$ are zero.
The statement "t |
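Numerically (my sketch): writing $p_0 = e^{-\lambda}$ for the probability that a single draw is zero, "at least two zeros" is the complement of "zero or exactly one zero", and a Monte Carlo run on the second order statistic agrees.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 5, 1.0
p0 = np.exp(-lam)   # P(a single Poisson(lam) draw equals 0)

# P(X_(2) = 0) = P(at least two of the n draws are zero)
exact = 1 - (1 - p0)**n - n * p0 * (1 - p0)**(n - 1)

# Monte Carlo cross-check on the second smallest of n draws
draws = rng.poisson(lam, size=(200_000, n))
second_smallest = np.sort(draws, axis=1)[:, 1]
mc = (second_smallest == 0).mean()

assert abs(exact - mc) < 0.01
```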
47,040 | Determining confidence interval with one observation (for Poisson distribution) | If $X \sim \mathsf{Pois}(\lambda),$ then $E(X) = \lambda$ and $SD(X) = \sqrt{\lambda}.$ For sufficiently large $\lambda,$ the random variable $X$ is approximately normally distributed. Then one says that $Z = \frac{X -\lambda}{\sqrt{\lambda}}$ is
approximately standard normal, so that
$$P\left(-1.96 < \frac{X -\lambda... | Determining confidence interval with one observation (for Poisson distribution) | If $X \sim \mathsf{Pois}(\lambda).$ then $E(X) = \lambda$ and $SD(X) = \sqrt{\lambda}.$ For sufficiently large $\lambda,$ the random variable $X$ is approximately normally distributed. Then one says t | Determining confidence interval with one observation (for Poisson distribution)
If $X \sim \mathsf{Pois}(\lambda),$ then $E(X) = \lambda$ and $SD(X) = \sqrt{\lambda}.$ For sufficiently large $\lambda,$ the random variable $X$ is approximately normally distributed. Then one says that $Z = \frac{X -\lambda}{\sqrt{\lambda... | Determining confidence interval with one observation (for Poisson distribution)
If $X \sim \mathsf{Pois}(\lambda),$ then $E(X) = \lambda$ and $SD(X) = \sqrt{\lambda}.$ For sufficiently large $\lambda,$ the random variable $X$ is approximately normally distributed. Then one says t
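Carrying the approximation through (my sketch; the function name is made up): solving $(x-\lambda)^2 = 1.96^2\,\lambda$ for $\lambda$ gives a quadratic whose two roots are the interval endpoints.

```python
import numpy as np

def approx_poisson_ci(x, z=1.96):
    # Solve (x - lam)^2 = z^2 * lam for lam; the two roots are the endpoints.
    # Equivalent quadratic: lam^2 - (2x + z^2) lam + x^2 = 0
    center = x + z**2 / 2
    half = z * np.sqrt(x + z**2 / 4)
    return center - half, center + half

lo, hi = approx_poisson_ci(25)

assert lo < 25 < hi               # the observed count sits inside the interval
for lam in (lo, hi):              # endpoints satisfy (x - lam)^2 = z^2 * lam
    assert abs((25 - lam)**2 - 1.96**2 * lam) < 1e-8
```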
47,041 | Extending a neural network to classify new objects | I don't know of any method to do exactly what you're asking, but in general you may want to look at Transfer Learning. In deep learning, this technique consists of loading a network trained to predict images on a large dataset such as ImageNet and replace the last, fully connected layer with your own, then train on you... | Extending a neural network to classify new objects | I don't know of any method to do exactly what you're asking, but in general you may want to look at Transfer Learning. In deep learning, this technique consists of loading a network trained to predict
I don't know of any method to do exactly what you're asking, but in general you may want to look at Transfer Learning. In deep learning, this technique consists of loading a network trained to predict images on a large dataset such as ImageNet and replace the last, ful... | Extending a neural network to classify new objects
I don't know of any method to do exactly what you're asking, but in general you may want to look at Transfer Learning. In deep learning, this technique consists of loading a network trained to predict
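A minimal numpy caricature of the last-layer idea (a frozen random projection stands in for the pretrained network; this illustrates the principle only, it is not real transfer-learning code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "new task" data: two Gaussian blobs standing in for two new classes.
X = np.vstack([rng.normal(-1, 1, size=(200, 10)),
               rng.normal(+1, 1, size=(200, 10))])
y = np.array([0] * 200 + [1] * 200)

# "Pretrained" feature extractor, kept frozen: a random projection + ReLU.
W_frozen = rng.normal(size=(10, 32))
feats = np.maximum(X @ W_frozen, 0.0)

# Replace the final layer: train only a fresh logistic head on the features.
w, b = np.zeros(32), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(feats @ w + b)))
    w -= 0.1 * feats.T @ (p - y) / len(y)
    b -= 0.1 * (p - y).mean()

acc = ((feats @ w + b > 0) == (y == 1)).mean()
assert acc > 0.9
```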
47,042 | Extending a neural network to classify new objects | This general area is called "continual", "incremental" or "life-long" learning, and it's quite an active area of research.
There are many approaches to continual learning, including different forms of regularization which penalize forgetting, dynamically expanding architectures, and explicit model memories. For more d... | Extending a neural network to classify new objects | This general area is called "continual", "incremental" or "life-long" learning, and it's quite an active area of research.
There are many approaches to continual learning, including different forms o | Extending a neural network to classify new objects
This general area is called "continual", "incremental" or "life-long" learning, and it's quite an active area of research.
There are many approaches to continual learning, including different forms of regularization which penalize forgetting, dynamically expanding arc... | Extending a neural network to classify new objects
This general area is called "continual", "incremental" or "life-long" learning, and it's quite an active area of research.
There are many approaches to continual learning, including different forms o |
47,043 | Extending a neural network to classify new objects | EDIT: now that I understand your question a little better. I don't have enough rep to comment so using responses.
So the labels on the dataset you have are binary, pear = 1/0. And the no class contains oranges and bananas which you already have a model for.
What if you did a two-step approach where you classify an obj... | Extending a neural network to classify new objects | EDIT: now that I understand your question a little better. I don't have enough rep to comment so using responses.
So the labels on the dataset you have are binary, pear = 1/0. And the no class contai | Extending a neural network to classify new objects
EDIT: now that I understand your question a little better. I don't have enough rep to comment so using responses.
So the labels on the dataset you have are binary, pear = 1/0. And the no class contains oranges and bananas which you already have a model for.
What if yo... | Extending a neural network to classify new objects
EDIT: now that I understand your question a little better. I don't have enough rep to comment so using responses.
So the labels on the dataset you have are binary, pear = 1/0. And the no class contai |
47,044 | Which is more numerically stable for OLS: pinv vs QR | Using the Moore-Penrose pseudo-inverse $X^{\dagger}$ of a matrix $X$ is more stable in the sense that it can directly account for rank-deficient design matrices $X$.
$X^{\dagger}$ allows us to naturally employ the identities: $X X^{\dagger} X = X$ and $X^{\dagger} X X^{\dagger} = X^{\dagger}$; the matrix $X^{\dagger}$ ca... | Which is more numerically stable for OLS: pinv vs QR | Using the Moore-Penrose pseudo-inverse $X^{\dagger}$ of a matrix $X$ is more stable in the sense that it can directly account for rank-deficient design matrices $X$.
$X^{\dagger}$ allows us to naturall | Which is more numerically stable for OLS: pinv vs QR
Using the Moore-Penrose pseudo-inverse $X^{\dagger}$ of a matrix $X$ is more stable in the sense that it can directly account for rank-deficient design matrices $X$.
$X^{\dagger}$ allows us to naturally employ the identities: $X X^{\dagger} X = X$ and $X^{... | Which is more numerically stable for OLS: pinv vs QR
Using the Moore-Penrose pseudo-inverse $X^{\dagger}$ of a matrix $X$ is more stable in the sense that it can directly account for rank-deficient design matrices $X$.
$X^{\dagger}$ allows us to naturall |
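A small numpy demonstration (my sketch) of why the pseudo-inverse copes with rank deficiency where plain QR does not:

```python
import numpy as np

rng = np.random.default_rng(0)

# Rank-deficient design: the third column duplicates the first.
X = rng.normal(size=(50, 3))
X[:, 2] = X[:, 0]
y = X @ np.array([1.0, 2.0, 0.0]) + 0.01 * rng.normal(size=50)

# pinv returns the minimum-norm least-squares solution despite the deficiency.
Xp = np.linalg.pinv(X)
beta_pinv = Xp @ y
assert np.linalg.norm(X @ beta_pinv - y) < 1.0   # fit still reproduces y

# The Moore-Penrose identities X Xp X = X and Xp X Xp = Xp hold numerically:
assert np.allclose(X @ Xp @ X, X)
assert np.allclose(Xp @ X @ Xp, Xp)

# Plain QR, by contrast, yields a singular R here (a zero diagonal entry),
# so naive back-substitution against R is ill-posed.
_, R = np.linalg.qr(X)
assert np.min(np.abs(np.diag(R))) < 1e-10
```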
47,045 | The exact value of Welch's t test degrees of freedom | Short answer: There is no exact degrees-of-freedom because the variance estimator in this test does not follow an exact chi-squared distribution.
Longer answer: The Welch T-test gives an approximate solution to the Fisher-Behrens problem (comparing the means of two samples with different variances). It uses the stude... | The exact value of Welch's t test degrees of freedom | Short answer: There is no exact degrees-of-freedom because the variance estimator in this test does not follow an exact chi-squared distribution.
Longer answer: The Welch T-test gives an approximate | The exact value of Welch's t test degrees of freedom
Short answer: There is no exact degrees-of-freedom because the variance estimator in this test does not follow an exact chi-squared distribution.
Longer answer: The Welch T-test gives an approximate solution to the Fisher-Behrens problem (comparing the means of two ... | The exact value of Welch's t test degrees of freedom
Short answer: There is no exact degrees-of-freedom because the variance estimator in this test does not follow an exact chi-squared distribution.
Longer answer: The Welch T-test gives an approximate |
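The Welch-Satterthwaite approximation itself is easy to compute (a sketch; the parameter values are made up):

```python
def welch_df(s1, n1, s2, n2):
    # Welch-Satterthwaite approximation to the degrees of freedom
    v1, v2 = s1**2 / n1, s2**2 / n2
    return (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

# Equal variances and equal n: reduces to the pooled df, n1 + n2 - 2 = 18.
df_equal = welch_df(s1=2.0, n1=10, s2=2.0, n2=10)
assert abs(df_equal - 18.0) < 1e-9

# Unequal variances: the approximate df is non-integer and lies between
# min(n1, n2) - 1 and n1 + n2 - 2.
df_unequal = welch_df(s1=1.0, n1=10, s2=5.0, n2=12)
assert min(10, 12) - 1 <= df_unequal < 10 + 12 - 2
```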
47,046 | The exact value of Welch's t test degrees of freedom | @Ben's answer is very clear about why an exact solution for the degrees of freedom isn't possible.
As for
The approximate degrees of freedom are rounded down to the nearest integer [citation needed]
This seems unusual. There is a section on the Talk page for the article that questions whether this is usual or not, a... | The exact value of Welch's t test degrees of freedom | @Ben's answer is very clear about why an exact solution for the degrees of freedom isn't possible.
As for
The approximate degrees of freedom are rounded down to the nearest integer [citation needed] | The exact value of Welch's t test degrees of freedom
@Ben's answer is very clear about why an exact solution for the degrees of freedom isn't possible.
As for
The approximate degrees of freedom are rounded down to the nearest integer [citation needed]
This seems unusual. There is a section on the Talk page for the a... | The exact value of Welch's t test degrees of freedom
@Ben's answer is very clear about why an exact solution for the degrees of freedom isn't possible.
As for
The approximate degrees of freedom are rounded down to the nearest integer [citation needed] |
47,047 | Covariance of products of dependent random variables | If I did this correctly:
\begin{eqnarray}
\text{Cov}(AC,BD)
&=&E(ABCD) - E(AC)E(BD)\\
&=&E(AB)E(CD) - E(A)E(C)E(B)E(D)\\
&=&[E(AB)-E(A)E(B)][E(CD)-E(C)E(D)]+E(A)E(B)[E(CD)-E(C)E(D)]+E(C)E(D)[E(AB)-E(A)E(B)]\\
&=&\text{Cov}(A,B)\text{Cov}(C,D)+E(A)E(B)\text{Cov}(C,D)+E(C)E(D)\text{Cov}(A,B)\end{eqnarray} | Covariance of products of dependent random variables | If I did this correctly:
\begin{eqnarray}
\text{Cov}(AC,BD)
&=&E(ABCD) - E(AC)E(BD)\\
&=&E(AB)E(CD) - E(A)E(C)E(B)E(D)\\
&=&[E(AB)-E(A)E(B)][E(CD)-E(C)E(D)]+E(A)E(B)[E(CD)-E(C)E(D)]+E(C)E(D)[E(AB)-E(A | Covariance of products of dependent random variables
If I did this correctly:
\begin{eqnarray}
\text{Cov}(AC,BD)
&=&E(ABCD) - E(AC)E(BD)\\
&=&E(AB)E(CD) - E(A)E(C)E(B)E(D)\\
&=&[E(AB)-E(A)E(B)][E(CD)-E(C)E(D)]+E(A)E(B)[E(CD)-E(C)E(D)]+E(C)E(D)[E(AB)-E(A)E(B)]\\
&=&\text{Cov}(A,B)\text{Cov}(C,D)+E(A)E(B)\text{Cov}(C,D)+... | Covariance of products of dependent random variables
If I did this correctly:
\begin{eqnarray}
\text{Cov}(AC,BD)
&=&E(ABCD) - E(AC)E(BD)\\
&=&E(AB)E(CD) - E(A)E(C)E(B)E(D)\\
&=&[E(AB)-E(A)E(B)][E(CD)-E(C)E(D)]+E(A)E(B)[E(CD)-E(C)E(D)]+E(C)E(D)[E(AB)-E(A |
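A Monte Carlo check of this identity (my sketch; note the derivation uses the assumption that $(A,B)$ is independent of $(C,D)$, which is what justifies $E(ABCD)=E(AB)E(CD)$, so the simulation constructs such pairs):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# (A, B) is a correlated pair, independent of the correlated pair (C, D).
a = rng.normal(1.0, 1.0, n)
b = a + rng.normal(0.5, 1.0, n)
c = rng.normal(2.0, 1.0, n)
d = c + rng.normal(-1.0, 1.0, n)

def cov(x, y):
    return (x * y).mean() - x.mean() * y.mean()

lhs = cov(a * c, b * d)
rhs = (cov(a, b) * cov(c, d)
       + a.mean() * b.mean() * cov(c, d)
       + c.mean() * d.mean() * cov(a, b))

# Both sides estimate the same quantity (true value 4.5 for these choices).
assert abs(lhs - rhs) < 0.1
```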
47,048 | Covariance of products of dependent random variables | https://www.jstor.org/stable/2286081 (Exact Covariance of Products of Random Variables) provides the general formulae. Assuming multivariate normality it is:
$$ \mathrm{cov}(xy, uv) = \mathrm{E}(x)\,\mathrm{E}(u)\, \mathrm{cov}(y, v) +
\mathrm{E}(x)\,\mathrm{E}(v)\,\mathrm{cov}(y, u) + \\
... | Covariance of products of dependent random variables | https://www.jstor.org/stable/2286081 (Exact Covariance of Products of Random Variables) provides the general formulae. Assuming multivariate normality it is:
$$ \mathrm{cov}(xy, uv) = \mathrm{E | Covariance of products of dependent random variables
From https://www.jstor.org/stable/2286081 (Exact Covariance of Products of Random Variables) provides the general formulae. Assuming multivariate normality it is:
$$ \mathrm{cov}(xy, uv) = \mathrm{E}(x)\,\mathrm{E}(u)\, \mathrm{cov}(y, v) +
\mathrm{E}(x)\,\ma... | Covariance of products of dependent random variables
From https://www.jstor.org/stable/2286081 (Exact Covariance of Products of Random Variables) provides the general formulae. Assuming multivariate normality it is:
$$ \mathrm{cov}(xy, uv) = \mathrm{E |
47,049 | Does it make sense to use the slope of trend line from a regression as a ratio between x and y | I presume $x$ is number of jobs and $y$ is number of hours to complete it. It's not correct to say that time for each job is 0.4 hours because you have a (pretty large) bias term. This means you have a fixed cost. Performing one job takes $90.4$ hours, two jobs $90.8$ hours etc. So, you can say that each additional job t... | Does it make sense to use the slope of trend line from a regression as a ratio between x and y | I presume $x$ is number of jobs and $y$ is number of hours to complete it. It's not correct to say that time for each job is 0.4 hours because you have a (pretty large) bias term. This means you have
I presume $x$ is number of jobs and $y$ is number of hours to complete it. It's not correct to say that time for each job is 0.4 hours because you have a (pretty large) bias term. This means you have a fix cost. Performing on... | Does it make sense to use the slope of trend line from a regression as a ratio between x and y
I presume $x$ is number of jobs and $y$ is number of hours to complete it. It's not correct to say that time for each job is 0.4 hours because you have a (pretty large) bias term. This means you have a fixed cost. Performing on... | Does it make sense to use the slope of trend line from a regression as a ratio between x and y
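To make the point concrete (the data below are hypothetical numbers consistent with the answer's intercept of 90 and slope of 0.4, not the asker's actual data):

```python
import numpy as np

jobs = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
# Hypothetical hours consistent with the fitted line: hours = 90 + 0.4 * jobs
hours = 90.0 + 0.4 * jobs

slope, intercept = np.polyfit(jobs, hours, deg=1)
assert abs(intercept - 90.0) < 1e-8   # fixed cost of ~90 hours
assert abs(slope - 0.4) < 1e-8        # each *additional* job adds 0.4 hours

# The naive per-job ratio y/x is NOT the slope when the intercept is large:
assert not np.allclose(hours / jobs, slope)
```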
47,050 | Does it make sense to use the slope of trend line from a regression as a ratio between x and y | Not exactly. Assuming a simple linear model $y = \beta_0 + \beta_1 x + \epsilon$, the parameter $\beta_1$ (in this case $0.4$) represents the effect of a one-unit change in the corresponding $x$ (here jobs) covariate on the mean value of the dependent variable, $y$ (here hours), assuming that any other covariates remai... | Does it make sense to use the slope of trend line from a regression as a ratio between x and y | Not exactly. Assuming a simple linear model $y = \beta_0 + \beta_1 x + \epsilon$, the parameter $\beta_1$ (in this case $0.4$) represents the effect of a one-unit change in the corresponding $x$ (here | Does it make sense to use the slope of trend line from a regression as a ratio between x and y
Not exactly. Assuming a simple linear model $y = \beta_0 + \beta_1 x + \epsilon$, the parameter $\beta_1$ (in this case $0.4$) represents the effect of a one-unit change in the corresponding $x$ (here jobs) covariate on the m... | Does it make sense to use the slope of trend line from a regression as a ratio between x and y
Not exactly. Assuming a simple linear model $y = \beta_0 + \beta_1 x + \epsilon$, the parameter $\beta_1$ (in this case $0.4$) represents the effect of a one-unit change in the corresponding $x$ (here |
47,051 | Independence of ratios of independent variates | It is a "well-known" property of the Gamma distributions that $x_1/(x_1+x_2)$ and $(x_1+x_2)$ are independent, and that $x_1+x_2+x_3$ and $Y= (x_1+x_2)/(x_1+x_2+x_3)$ are independent. For instance, writing
\begin{align*}
x_1&=\{\varrho \sin(\theta)\}^2\\
x_2&=\{\varrho \cos(\theta)\}^2\\
\end{align*}
we get that
$$X=\f... | Independence of ratios of independent variates | It is a "well-known" property of the Gamma distributions that $x_1/(x_1+x_2)$ and $(x_1+x_2)$ are independent, and that $x_1+x_2+x_3$ and $Y= (x_1+x_2)/(x_1+x_2+x_3)$ are independent. For instance, wr | Independence of ratios of independent variates
It is a "well-known" property of the Gamma distributions that $x_1/(x_1+x_2)$ and $(x_1+x_2)$ are independent, and that $x_1+x_2+x_3$ and $Y= (x_1+x_2)/(x_1+x_2+x_3)$ are independent. For instance, writing
\begin{align*}
x_1&=\{\varrho \sin(\theta)\}^2\\
x_2&=\{\varrho \co... | Independence of ratios of independent variates
It is a "well-known" property of the Gamma distributions that $x_1/(x_1+x_2)$ and $(x_1+x_2)$ are independent, and that $x_1+x_2+x_3$ and $Y= (x_1+x_2)/(x_1+x_2+x_3)$ are independent. For instance, wr |
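A quick simulation of the stated independence properties using chi-squared variates (a special case of the Gamma family; checking zero correlation is a necessary, not sufficient, condition, and the settings are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Independent chi-squared draws (Gamma with shape 1/2, scale 2).
x1 = rng.chisquare(1, n)
x2 = rng.chisquare(1, n)
x3 = rng.chisquare(1, n)

X = x1 / (x1 + x2)               # "direction" within the first two coordinates
Y = (x1 + x2) / (x1 + x2 + x3)   # share of the total held by the first two
S = x1 + x2 + x3                 # the total

# Independence implies zero correlation, which the simulation confirms:
for u, v in [(X, Y), (X, S), (Y, S)]:
    assert abs(np.corrcoef(u, v)[0, 1]) < 0.01
```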
47,052 | Independence of ratios of independent variates | A geometrical interpretation/intuition
You could view the chi-squared variables $x_1,x_2,x_3$ as relating to independent standard normal distributed variables which in turn relate to uniformly distributed variables on an n-sphere https://en.wikipedia.org/wiki/N-sphere#Generating_random_points
In the same way as yo...
47,053 | Should log-likelihood values increase when the sample size of a simulation increases? | It depends. More importantly though, it doesn't really matter.
Remember, in an iid setting, the Likelihood is the product of PDFs (or PMFs) as a function of $\theta$. If each $f(x_i|\theta) < 1$ then the Likelihood will get smaller for each additional point. Uniform distributions make this point clear.
Let $X_1, \cdo...
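The uniform example can be made concrete with a short sketch (the specific $\theta = 5$ and sample sizes are mine): for $\mathrm{Uniform}(0,\theta)$ with $\theta > 1$, every density value is $1/\theta < 1$, so each extra observation lowers the log-likelihood.

```python
import math, random
random.seed(0)

theta = 5.0  # Uniform(0, theta): each density value is 1/theta = 0.2 < 1
for n in (10, 100, 1000):
    x = [random.uniform(0, theta) for _ in range(n)]
    # every draw lies in the support, so each term contributes log(1/theta)
    loglik = sum(math.log(1 / theta) for _ in x)
    print(n, loglik)  # equals -n*log(theta), so it decreases as n grows
```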
47,054 | Distribution of difference of two random variables with chi-squared distribution | This is not a chi-squared density, $X-Y$ will have support on $(-\infty, +\infty)$.
If the two variables are independent, it has mean 0 and variance $4k$.
If $k$ is large, its density is well approximated by that of a normal variable with mean 0 and variance $4k$. In the general case, its MGF has a closed form:
$$E(\exp(t(X-Y)))...
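A simulation sketch of the mean-$0$, variance-$4k$ claim (the choice $k = 10$, the seed, and drawing $\chi^2_k$ as $\mathrm{Gamma}(k/2,\ \text{scale}=2)$ are mine):

```python
import random
random.seed(42)

k, n = 10, 100_000
# chi-squared_k is Gamma(shape = k/2, scale = 2)
d = [random.gammavariate(k / 2, 2) - random.gammavariate(k / 2, 2)
     for _ in range(n)]
mean = sum(d) / n
var = sum((x - mean) ** 2 for x in d) / n
print(mean, var)  # mean near 0, variance near 4*k = 40
```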
47,055 | Distribution of difference of two random variables with chi-squared distribution | Since $X$ and $Y$ can also be considered to be Gamma random variables with (order, rate) parameters $\left(\frac k2, \frac 12\right)$, then, as Sebapi points out, $X-Y$ has support $(-\infty,\infty)$. Furthermore, if $X$ and $Y$ are assumed to be independent, then the pdf of $X-Y$ is also symmetric about $0$ and has m...
47,056 | Distribution of difference of two random variables with chi-squared distribution | Using the following links and the parameter correspondence between the characteristic function (chf) of the difference of $\Gamma(\alpha,\nu_{\Gamma})$ rvs and that of a symmetric $VG(\sigma,\nu)$ variance-gamma rv, we can obtain the density of $X-Y$ indirectly, as an alternative to the method of chf inversion,
\begin{equation}
\phi_{VG}(u,\sigma,\nu)=\le...
47,057 | Who invented the hazard function? | The term for it seems to be relatively recent but the notion is considerably older.
Jeff Miller's Earliest Known Uses of Some of the Words of Mathematics discusses the use of the term 'hazard rate', and it looks like that's from the 50s and 60s. It reports that the term "death-hazard rate" occurs in D. J. Davis "An Ana...
47,058 | Machine Learning - Prediction Interval - Cheating? | In general, prediction intervals are considered better than point estimates. While it's great to have a good estimate for what a stock price will be tomorrow, it's much better to be able to give a range of values that the stock price is very likely to be in.
That being said, it's generally more difficult to...
47,059 | Machine Learning - Prediction Interval - Cheating? | I don't understand your manager's attitude. If the model predicts that the stock will be 173.56, and it's actually 173.55, will they consider that a "failure"? If you're trying to make money from the stock market, you shouldn't be depending on getting the price exactly right. Stock investment is all about reducing vari...
47,060 | question about MSE mean square error | I will try to give an intuitive example to understand why the arithmetic mean
\begin{equation} \overline x_1 = \sum_{i=1}^{n} \frac{x_i}{n}
\end{equation}
is not as good as
\begin{equation} \overline x_2 = \frac{a + b}{2}
\end{equation}
In the case where $X \sim \mathrm{unif}(\alpha,\beta)$
Imagine that you have 10 ...
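A simulation sketch of that comparison for $\mathrm{unif}(0,1)$, taking $\overline{x}_2$ to be the midrange $(\min + \max)/2$ (the sample size $n=10$, replication count, and seed are my choices):

```python
import random
random.seed(1)

n, reps, true_mid = 10, 20_000, 0.5
sq_err_mean = sq_err_mid = 0.0
for _ in range(reps):
    x = [random.random() for _ in range(n)]
    m1 = sum(x) / n                 # arithmetic mean
    m2 = (min(x) + max(x)) / 2      # midrange
    sq_err_mean += (m1 - true_mid) ** 2
    sq_err_mid += (m2 - true_mid) ** 2

# theory: MSE(mean) = 1/(12n) ~ 0.0083, MSE(midrange) = 1/(2(n+1)(n+2)) ~ 0.0038
print(sq_err_mean / reps, sq_err_mid / reps)
```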
47,061 | question about MSE mean square error | Note firstly that MSE, unlike the related measurement, variance (Var), is a biased estimator of a sample variability, which is one source of confusion in the text quoted. For a normal distribution (only) the relationship is $MSE(\bar{X})=\frac{n-1}{n}Var(\bar{X})$, with a more general relationship given through excess ...
47,062 | Profile Likelihood: why optimize all other parameters while tracing a profile for a particular one? | You can think of the profile confidence interval as an inversion of the likelihood ratio test; you are comparing a model in which your parameter of interest is allowed to vary against a set of nested models in which the parameter of interest is fixed. Your confidence interval is the set of values for which the paramete...
47,063 | Value Iteration For Terminal States in MDP | The point of visiting a state in value iteration is in order to update its value, using the update:
$$v(s) \leftarrow \text{max}_a[\sum_{r,s'} p(s', r|s,a)(r + \gamma v(s'))]$$
First thing to note is that the state value of terminal state $s^T$ is $v(s^T) = 0$, always, since by definition there are no future rewards to...
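A minimal sketch of this on a hypothetical 3-state chain (the chain, the $-1$ step reward, and $\gamma = 0.9$ are invented): the terminal state's value is never updated and stays $0$, yet it still enters the other updates through $v(s')$.

```python
gamma = 0.9
v = [0.0, 0.0, 0.0]   # states 0 -> 1 -> 2; v[2] is terminal and never updated
for _ in range(100):
    for s in (0, 1):  # sweep only the non-terminal states
        # single deterministic action: move right with reward -1
        v[s] = -1 + gamma * v[s + 1]

print([round(val, 6) for val in v])  # [-1.9, -1.0, 0.0]
```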
47,064 | Training one model to work for many time series | Yes, there are ways of doing this. You could apply some kind of meta learning to adapt the learning process to each separate time series, or use transfer learning to transfer the knowledge learned from one series to another. I don't have pointers, since this is certainly not the first thing I would do, see below.
You c...
47,065 | Training one model to work for many time series | Is there some sort of methodology for training a model to make predictions on many (seemingly unrelated) time series data?
The closest thing to an actual methodology for this is hierarchical forecasting.
On my team (I work in demand forecasting) we use a type of hierarchical forecasting to generate forecasts for prod...
47,066 | What is the actual significance of a difference in AIC or BIC values? | The difference in AIC (or BIC) for two models is twice the log-likelihood ratio minus a constant: it follows immediately that in any particular case selecting the AIC corresponds to performing a likelihood-ratio test, but that in different cases it corresponds to tests of different significance levels.
With nested mode...
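The implied levels are easy to compute (a sketch with made-up df values): for nested models differing by $df$ parameters, preferring the lower AIC is an LR test with critical value $2\,df$.

```python
import math

# df = 1: the LR statistic is chi-squared(1) = Z^2 under the null, so the
# level implied by critical value 2 is P(Z^2 > 2) = P(|Z| > sqrt(2)) = erfc(1)
alpha_df1 = math.erfc(1.0)
print(round(alpha_df1, 4))  # 0.1573 -- roughly a 16% test, not a 5% one

# df = 2: chi-squared(2) has survival function exp(-x/2), so at x = 4:
alpha_df2 = math.exp(-2.0)
print(round(alpha_df2, 4))  # 0.1353
```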
47,067 | What is the actual significance of a difference in AIC or BIC values? | Suppose one generates values from a standard normal distribution, $\mathcal{N}(0,1)$. If we have only generated two values, $n=2$, then we have a discrete uniform distribution, not a convincingly discrete approximation of a normal distribution. Indeed, this is true for any $n=2$, no matter which generating distribution...
47,068 | Inconsistent mgcv gam.check results | The issue is due to the basis dimension test used in gam.check() being based on permutations of model residuals. These permutations are computed using a pseudo random number generator; by design each time you call gam.check() (or directly k.check() itself), a different set of permutations are produced, which subtly alt...
47,069 | Deriving the canonical link for a binomial distribution | You're almost right, and it's such an easy fix:
$$\mu_i = p_i n$$ so $$\log(\frac{\mu_i}{n - \mu_i}) = \log(\frac{np_i}{n - np_i}) = ...$$
So, the $n$ can be a 1, as long as you swap out $\mu_i$ for $p_i$.
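Spelling out the elided step (dividing numerator and denominator by $n$):

```latex
\log\left(\frac{np_i}{n - np_i}\right)
  = \log\left(\frac{p_i}{1 - p_i}\right)
  = \operatorname{logit}(p_i)
```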
47,070 | Autoregression in nlme - understanding how to specify corAR1 | The AR1 structure specifies that the correlations between the repeated measurements of each subject decrease with the time lag, i.e., the distance in time between the measurements.
When you specify lme(..., correlation = corAR1()) this is equivalent to lme(..., correlation = corAR1(form = ~ 1 | id)) and assumes that t...
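The implied within-subject correlation structure can be sketched directly: for AR1 with equally spaced times, $\mathrm{corr}(e_{t_i}, e_{t_j}) = \rho^{|i - j|}$ (the $\rho = 0.6$ and five time points are made-up values for illustration).

```python
# Implied AR1 within-subject correlation matrix: rho^|lag| between
# measurements at equally spaced times (rho = 0.6 and 5 times are made up).
rho, n_times = 0.6, 5
corr = [[rho ** abs(i - j) for j in range(n_times)] for i in range(n_times)]

# first row: correlation of time 0 with times 0..4 decays geometrically
print([round(c, 4) for c in corr[0]])  # [1.0, 0.6, 0.36, 0.216, 0.1296]
```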
47,071 | How to handle too many categorical features with too many categories for XGBoost? | There are possibly many ways to tackle this, depending on your data, feature cardinality, etc.:
After one-hot-encoding, it may turn out some new features are almost always zero and have negligible statistical significance and you can just drop them
Whole features (before encoding) may turn out to be insignificant
For ...
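The first two bullets can be sketched without any libraries by bucketing rare categories before encoding (the 20% threshold and toy data are invented):

```python
from collections import Counter

values = ["a", "a", "b", "a", "c", "b", "a", "a"]
counts = Counter(values)
min_share = 0.2   # keep only categories seen in at least 20% of rows
keep = {v for v, c in counts.items() if c / len(values) >= min_share}
encoded = [v if v in keep else "other" for v in values]

print(sorted(keep))  # ['a', 'b'] -- 'c' is too rare to get its own column
print(encoded)
```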
47,072 | Statistics : Why is the Cramer-Rao Lower Bound (CRLB) inverse of the Fisher Information I(θ) ? | Almost two years later comes the longer answer: This is not a rigorous explanation but hopefully gives some intuition that the variance of the ML-estimator decreases as the curvature of the log-likelihood increases (at least in the following simple example).
Assume that we have $m$ samples of size $n$ from $N(0, \sigma_1^2)$ a...
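A simulation in a similar spirit, though not the answer's own setup (known-$\sigma$ normal mean; all constants are mine): the Fisher information of an $n$-sample is $I(\mu) = n/\sigma^2$, and the MLE $\bar X$ attains the bound $1/I(\mu) = \sigma^2/n$.

```python
import random
random.seed(7)

mu, sigma, n, m = 0.0, 2.0, 25, 20_000
crlb = sigma ** 2 / n   # 1 / I(mu) = 0.16
mles = [sum(random.gauss(mu, sigma) for _ in range(n)) / n for _ in range(m)]
mbar = sum(mles) / m
var_mle = sum((t - mbar) ** 2 for t in mles) / m
print(crlb, var_mle)  # empirical variance of the MLE sits at the bound
```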
47,073 | Statistics : Why is the Cramer-Rao Lower Bound (CRLB) inverse of the Fisher Information I(θ) ? | I think this video gives a neat intuition, as it discu...
47,074 | Statistics : Why is the Cramer-Rao Lower Bound (CRLB) inverse of the Fisher Information I(θ) ? | There is a certain correspondence between the variance of the estimator and the variance of the score or derivative of the likelihood. This becomes possibly more clear when we slightly rewrite the expression for the Cramer-Rao bound: instead of $\text{var}( \hat{\theta} ) \geq \frac{1}{I(\theta)}$ we can write it also a...
47,075 | How to estimate probability density function (pdf) from empirical cumulative distribution function (ecdf)? | This is an outline rather than a complete answer. There are two main issues: (a) finding the data values used to make the ECDF plot, and (b) using a histogram and KDE methods to estimate the PDF.
Data for an example: Here is a simple demo in R for a sample with $n=100$ unique values from
$\mathsf{Norm}(\mu=50,\, \sigma...
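For issue (b), a library-free sketch (in Python rather than the answer's R; the bin count and constants are mine): the ECDF jumps by $1/n$ at each data value, so the jump locations are the sample itself, and a density-normalized histogram of them is a crude PDF estimate.

```python
import random
random.seed(3)

x = sorted(random.gauss(50, 10) for _ in range(100))  # stand-in jump points
nbins = 10
lo, hi = min(x), max(x)
width = (hi - lo) / nbins
counts = [0] * nbins
for xi in x:
    counts[min(int((xi - lo) / width), nbins - 1)] += 1
density = [c / (len(x) * width) for c in counts]

print(sum(d * width for d in density))  # the estimate integrates to 1
```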
47,076 | Interpreting a matrix calculation | Consider the random variable $$y = c^T x$$Then the mean of $y$ is
$$E(y) = c^T E(x) = c^T \mu$$
and variance of $y$ is
$$var(y) = E(y - c^T \mu)^2 = E(y^2) - (c^T \mu)^2 = E(y^2) - c^T \mu\mu^T c \tag{*}$$
But
$$y^2 = (c^Tx)^2 = c^Tx\, c^Tx = c^Txx^Tc$$
So $$E(y^2) = c^TE(xx^T)c = c^T(\Sigma + \mu\mu^T)c \tag{**}$$
Repla...
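A numeric check of where the derivation lands, $\mathrm{var}(c^Tx) = c^T\Sigma c$ with $\Sigma$ the covariance matrix (the particular $A$, $\mu$, $c$, and seed are mine; $x = Az + \mu$ with iid standard normal $z$, so $\Sigma = AA^T$):

```python
import random
random.seed(0)

A = [[1.0, 0.0], [0.5, 2.0]]
mu = [1.0, -2.0]
c = [3.0, -1.0]

# Sigma = A A^T, and the quadratic form c^T Sigma c
Sigma = [[sum(A[i][k] * A[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
target = sum(c[i] * Sigma[i][j] * c[j] for i in range(2) for j in range(2))

m = 100_000
ys = []
for _ in range(m):
    z = [random.gauss(0, 1), random.gauss(0, 1)]
    x = [mu[i] + A[i][0] * z[0] + A[i][1] * z[1] for i in range(2)]
    ys.append(c[0] * x[0] + c[1] * x[1])
ybar = sum(ys) / m
var_y = sum((y - ybar) ** 2 for y in ys) / m

print(target, var_y)  # both ~ 10.25
```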
47,077 | How to test if two RMSE are significantly different? | To test whether two (root) mean squared prediction errors are significantly different, the standard test is the Diebold-Mariano test (Diebold & Mariano, 1995, Journal of Business and Economic Statistics). We have a diebold-mariano tag, which may be useful. I also recommend Diebold's (2015, Journal of Business and Eco...
47,078 | RNN vs Convolution 1D | Yes, the interpretation of the dimensions is pretty similar in both cases.
An important case where RNNs are easier to use is with data of unknown lengths. For example, in sentence translation (e.g. translating Chinese to Icelandic) both the input and output sizes are dynamic. In this case, it is easier and more intuiti... | RNN vs Convolution 1D | Yes, the interpretation of the dimensions is pretty similar in both cases.
An important case where RNNs are easier to use is with data of unknown lengths. For example, in sentence translation (e.g. tr | RNN vs Convolution 1D
Yes, the interpretation of the dimensions is pretty similar in both cases.
An important case where RNNs are easier to use is with data of unknown lengths. For example, in sentence translation (e.g. translating Chinese to Icelandic) both the input and output sizes are dynamic. In this case, it is e... | RNN vs Convolution 1D
Yes, the interpretation of the dimensions is pretty similar in both cases.
An important case where RNNs are easier to use is with data of unknown lengths. For example, in sentence translation (e.g. tr |
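One side of the comparison can be made concrete: a 1-D convolution has a fixed-size kernel, so the same weights apply to an input of any length (only the output length changes). The helper `conv1d_valid` below is a toy written for this dump, not any library's API:

```python
import numpy as np

def conv1d_valid(x, kernel):
    """'Valid'-mode 1-D convolution (cross-correlation, as in deep learning
    libraries): slide one fixed kernel over an input of any length."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

kernel = np.array([0.25, 0.5, 0.25])   # one set of weights, reused everywhere
for n in (5, 12, 100):                  # variable-length inputs, same layer
    x = np.arange(n, dtype=float)
    out = conv1d_valid(x, kernel)
    print(n, "->", len(out))            # output length is n - 3 + 1
```

Note this covers variable input length only; producing a dynamic *output* length (as in translation) is where the recurrent/seq2seq setup becomes the more natural fit.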
47,079 | Bootstrap confidence intervals - how many replications to choose? | Why does it make sense to perform a bootstrap procedure before calculating the confidence intervals? Will they be more precise? And if so, can anyone explain why?
You can calculate bootstrap confidence intervals for complex situations, i.e. properties ("statistics") that are not easily accessible analytically. I'm th... | Bootstrap confidence intervals - how many replications to choose? | Why does it make sense to perform a bootstrap procedure before calculating the confidence intervals? Will they be more precise? And if so, can anyone explain why?
You can calculate bootstrap confide | Bootstrap confidence intervals - how many replications to choose?
Why does it make sense to perform a bootstrap procedure before calculating the confidence intervals? Will they be more precise? And if so, can anyone explain why?
You can calculate bootstrap confidence intervals for complex situations, i.e. properties ... | Bootstrap confidence intervals - how many replications to choose?
Why does it make sense to perform a bootstrap procedure before calculating the confidence intervals? Will they be more precise? And if so, can anyone explain why?
You can calculate bootstrap confide |
47,080 | Bootstrap confidence intervals - how many replications to choose? | Let's take the simplest case of using just the percentiles to compute the confidence interval. In that case you repeatedly sample with replacement from your data, compute your statistic in each of these samples and store those estimates. The 2.5th percentile of those stored estimates represents the lower bound and the ... | Bootstrap confidence intervals - how many replications to choose? | Let's take the simplest case of using just the percentiles to compute the confidence interval. In that case you repeatedly sample with replacement from your data, compute your statistic in each of the | Bootstrap confidence intervals - how many replications to choose?
Let's take the simplest case of using just the percentiles to compute the confidence interval. In that case you repeatedly sample with replacement from your data, compute your statistic in each of these samples and store those estimates. The 2.5th percen... | Bootstrap confidence intervals - how many replications to choose?
Let's take the simplest case of using just the percentiles to compute the confidence interval. In that case you repeatedly sample with replacement from your data, compute your statistic in each of the |
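The percentile recipe in this answer — resample with replacement, recompute the statistic, read off the 2.5th and 97.5th percentiles — fits in a few lines. A sketch assuming NumPy; the function name and the simulated sample are invented:

```python
import numpy as np

def percentile_bootstrap_ci(data, stat=np.mean, n_boot=4000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic each time, and take the empirical alpha/2 quantiles."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    boot = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    return np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])

rng = np.random.default_rng(42)
sample = rng.normal(loc=10.0, scale=2.0, size=100)
lo, hi = percentile_bootstrap_ci(sample)
print(lo, hi)
```

Passing a different `stat` (median, trimmed mean, a ratio, ...) is exactly the "complex situations" case where the bootstrap earns its keep.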
47,081 | Bootstrap confidence intervals - how many replications to choose? | According to Efron (the "inventor" of the bootstrap technique), you should make 1600 replicas. I have no other clue about where this number comes from, except that its square root is 40, an easy number to divide by. I suggest you go like in any other Monte-Carlo. Try 1600, then increase the bootstrap samples until it st... | Bootstrap confidence intervals - how many replications to choose? | According to Efron (the "inventor" of the bootstrap technique), you should make 1600 replicas. I have no other clue about where this number comes from, except that its square root is 40, an easy number | Bootstrap confidence intervals - how many replications to choose?
According to Efron (the "inventor" of the bootstrap technique), you should make 1600 replicas. I have no other clue about where this number comes from, except that its square root is 40, an easy number to divide by. I suggest you go like in any other Mont... | Bootstrap confidence intervals - how many replications to choose?
According to Efron (the "inventor" of the bootstrap technique), you should make 1600 replicas. I have no other clue about where this number comes from, except that its square root is 40, an easy number
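The "increase B until it stabilizes" advice can be checked directly: rerun the bootstrap several times and watch how much a CI endpoint jitters for small versus large B. Everything below (data, B values, run counts) is invented for illustration, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.exponential(scale=1.0, size=80)

def upper_endpoint(n_boot, seed):
    """97.5% percentile-bootstrap endpoint for the mean, with B = n_boot."""
    r = np.random.default_rng(seed)
    boot = [r.choice(data, size=data.size).mean() for _ in range(n_boot)]
    return np.percentile(boot, 97.5)

# Jitter of the endpoint across 30 independent bootstrap runs, small vs large B.
sd_small = np.std([upper_endpoint(100, s) for s in range(30)])
sd_large = np.std([upper_endpoint(1600, s) for s in range(30)])
print(sd_small, sd_large)   # Monte-Carlo noise shrinks roughly like 1/sqrt(B)
```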
47,082 | Bootstrap confidence intervals - how many replications to choose? | 1. Bootstrap before calculating CIs
Not sure if I understood your question correctly, but if you were asking, ‘Should I be using bootstrap to compute the CIs’, then, the missing part of the question is, ‘bootstrap instead of what’, ‘more precise’ – ‘precise compared to what and in which sense’.
There are multiple ways ... | Bootstrap confidence intervals - how many replications to choose? | 1. Bootstrap before calculating CIs
Not sure if I understood your question correctly, but if you were asking, ‘Should I be using bootstrap to compute the CIs’, then, the missing part of the question i | Bootstrap confidence intervals - how many replications to choose?
1. Bootstrap before calculating CIs
Not sure if I understood your question correctly, but if you were asking, ‘Should I be using bootstrap to compute the CIs’, then, the missing part of the question is, ‘bootstrap instead of what’, ‘more precise’ – ‘prec... | Bootstrap confidence intervals - how many replications to choose?
1. Bootstrap before calculating CIs
Not sure if I understood your question correctly, but if you were asking, ‘Should I be using bootstrap to compute the CIs’, then, the missing part of the question i |
47,083 | In a linear regression, should I include independent variables that is already known to be predictive of the dependent variable? | $$ \text{profit} = (\text{price}-\text{cost})\times\text{sales}. $$
I don't think it makes sense to regress profit on price or cost. Or sales, for that matter. We know the relationship above.
Instead, work on understanding the three drivers above. For instance, as a first approximation, you can treat cost as fixed, sin... | In a linear regression, should I include independent variables that is already known to be predictiv | $$ \text{profit} = (\text{price}-\text{cost})\times\text{sales}. $$
I don't think it makes sense to regress profit on price or cost. Or sales, for that matter. We know the relationship above.
Instead, | In a linear regression, should I include independent variables that is already known to be predictive of the dependent variable?
$$ \text{profit} = (\text{price}-\text{cost})\times\text{sales}. $$
I don't think it makes sense to regress profit on price or cost. Or sales, for that matter. We know the relationship above.... | In a linear regression, should I include independent variables that is already known to be predictiv
$$ \text{profit} = (\text{price}-\text{cost})\times\text{sales}. $$
I don't think it makes sense to regress profit on price or cost. Or sales, for that matter. We know the relationship above.
Instead, |
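The identity $\text{profit} = (\text{price}-\text{cost})\times\text{sales}$ is deterministic, which is exactly why regressing profit on it is pointless: there is nothing stochastic left to explain. A small sketch assuming NumPy; all numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
price = rng.uniform(8, 12, size=50)
cost = rng.uniform(4, 6, size=50)
sales = rng.integers(100, 200, size=50).astype(float)
profit = (price - cost) * sales            # the identity in the answer

# "Regress" profit on the constructed regressor (price - cost) * sales:
x = (price - cost) * sales
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, profit, rcond=None)
resid = profit - X @ beta
r2 = 1 - (resid ** 2).sum() / ((profit - profit.mean()) ** 2).sum()
print(beta, r2)   # slope ~1, intercept ~0, R^2 ~1: the relation is exact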
47,084 | In a linear regression, should I include independent variables that is already known to be predictive of the dependent variable? | I think something that might help is to slightly rephrase the question. I agree that for profitability, the largest difference in buying vs selling price would be a good indicator (or perhaps the only indicator you might need), but that's hardly insightful. What I would do is to normalize the profit by the buying or selling pr... | In a linear regression, should I include independent variables that is already known to be predictiv | I think something that might help is to slightly rephrase the question. I agree that for profitability, the largest difference in buying vs selling price would be a good indicator (or perhaps the only indicat | In a linear regression, should I include independent variables that is already known to be predictive of the dependent variable?
I think something that might help is to slightly rephrase the question. I agree that for profitability, the largest difference in buying vs selling price would be a good indicator (or perhaps the onl... | In a linear regression, should I include independent variables that is already known to be predictive of the dependent variable?
I think something that might help is to slightly rephrase the question. I agree that for profitability, the largest difference in buying vs selling price would be a good indicator (or perhaps the only indicat
47,085 | Question about Xgboost paper weights and decision-rules | The basic idea is behind boosting with (regression) trees is that we are learning functions $f$ (here in the form of trees $w_{q(x)}$). The weights $w$ at the $T$ leafs of the tree are representing the prediction of the $k$-th tree.
$q$ is the tree structure (e.g. that of stump with only a root node, or that of an ela... | Question about Xgboost paper weights and decision-rules | The basic idea is behind boosting with (regression) trees is that we are learning functions $f$ (here in the form of trees $w_{q(x)}$). The weights $w$ at the $T$ leafs of the tree are representing th | Question about Xgboost paper weights and decision-rules
The basic idea is behind boosting with (regression) trees is that we are learning functions $f$ (here in the form of trees $w_{q(x)}$). The weights $w$ at the $T$ leafs of the tree are representing the prediction of the $k$-th tree.
$q$ is the tree structure (e.g... | Question about Xgboost paper weights and decision-rules
The basic idea is behind boosting with (regression) trees is that we are learning functions $f$ (here in the form of trees $w_{q(x)}$). The weights $w$ at the $T$ leafs of the tree are representing th |
47,086 | Build a regression model with multiple small time series | I'd consider a mixed model with an effect of time-in-career (maybe an additive/smooth term to allow for nonlinear effects) and a random effect of (time-in-career|player), which allows for variation in the pattern for different players. That doesn't explicitly consider number of points in the previous year, but it seems... | Build a regression model with multiple small time series | I'd consider a mixed model with an effect of time-in-career (maybe an additive/smooth term to allow for nonlinear effects) and a random effect of (time-in-career|player), which allows for variation in | Build a regression model with multiple small time series
I'd consider a mixed model with an effect of time-in-career (maybe an additive/smooth term to allow for nonlinear effects) and a random effect of (time-in-career|player), which allows for variation in the pattern for different players. That doesn't explicitly con... | Build a regression model with multiple small time series
I'd consider a mixed model with an effect of time-in-career (maybe an additive/smooth term to allow for nonlinear effects) and a random effect of (time-in-career|player), which allows for variation in |
47,087 | Marginal density of $X_1$ given that $X_1 + X_2 = d$ where $X_1$ and $X_2$ are iid Weibull? | Let's apply Bayes theorem:
$$f(X_1 \vert X_1+X_2 = d) = \frac{f(X_1+X_2 = d \vert X_1)f(X_1)}{f(X_1+X_2 = d)} = cte \cdot f(X_2=d-X_1)f(X_1)$$
Substituting expressions for Weibull distributions:
$$f(X_1 \vert X_1+X_2 = d) = cte \cdot \left(\frac{k}{\lambda}\right) \left(\frac{d-x_1}{\lambda}\right)^{k-1} e^{((d-x_1)/\l... | Marginal density of $X_1$ given that $X_1 + X_2 = d$ where $X_1$ and $X_2$ are iid Weibull? | Let's apply Bayes theorem:
$$f(X_1 \vert X_1+X_2 = d) = \frac{f(X_1+X_2 = d \vert X_1)f(X_1)}{f(X_1+X_2 = d)} = cte \cdot f(X_2=d-X_1)f(X_1)$$
Substituting expressions for Weibull distributions:
$$f(X | Marginal density of $X_1$ given that $X_1 + X_2 = d$ where $X_1$ and $X_2$ are iid Weibull?
Let's apply Bayes theorem:
$$f(X_1 \vert X_1+X_2 = d) = \frac{f(X_1+X_2 = d \vert X_1)f(X_1)}{f(X_1+X_2 = d)} = cte \cdot f(X_2=d-X_1)f(X_1)$$
Substituting expressions for Weibull distributions:
$$f(X_1 \vert X_1+X_2 = d) = cte ... | Marginal density of $X_1$ given that $X_1 + X_2 = d$ where $X_1$ and $X_2$ are iid Weibull?
Let's apply Bayes theorem:
$$f(X_1 \vert X_1+X_2 = d) = \frac{f(X_1+X_2 = d \vert X_1)f(X_1)}{f(X_1+X_2 = d)} = cte \cdot f(X_2=d-X_1)f(X_1)$$
Substituting expressions for Weibull distributions:
$$f(X |
47,088 | Cross-entropy for comparing images | The cross-entropy between a single label and prediction would be
$$L = -\sum_{c \in C} y_{c} \log \hat y_{c}$$
where $C$ is the set of all classes. This is the first expression in your post. However, we need to sum over all pixels in an image to apply this:
$$L = -\sum_{i \in I} \sum_{c \in C} y_{i,c} \log \hat y_{i,c}... | Cross-entropy for comparing images | The cross-entropy between a single label and prediction would be
$$L = -\sum_{c \in C} y_{c} \log \hat y_{c}$$
where $C$ is the set of all classes. This is the first expression in your post. However, | Cross-entropy for comparing images
The cross-entropy between a single label and prediction would be
$$L = -\sum_{c \in C} y_{c} \log \hat y_{c}$$
where $C$ is the set of all classes. This is the first expression in your post. However, we need to sum over all pixels in an image to apply this:
$$L = -\sum_{i \in I} \sum_... | Cross-entropy for comparing images
The cross-entropy between a single label and prediction would be
$$L = -\sum_{c \in C} y_{c} \log \hat y_{c}$$
where $C$ is the set of all classes. This is the first expression in your post. However, |
47,089 | Distribution of $\sum_{j=1}^n\ln\left(\frac{X_{(j)}}{X_{(1)}}\right)$ when $X_i$'s are i.i.d Pareto variables | A simpler approach might be to use the fact that if $x \sim \text{Pareto}(\theta,a)$, then conditioning upon $x \geq b$ results in $x \sim \text{Pareto}(b,a)$. Consequently, $x | x_{(1)} \sim \text{Pareto}(x_{(1)}, a)$, except for the single observation corresponding to $x_{(1)}$. When we then take the ratio $x/x_{(1... | Distribution of $\sum_{j=1}^n\ln\left(\frac{X_{(j)}}{X_{(1)}}\right)$ when $X_i$'s are i.i.d Pareto | A simpler approach might be to use the fact that if $x \sim \text{Pareto}(\theta,a)$, then conditioning upon $x \geq b$ results in $x \sim \text{Pareto}(b,a)$. Consequently, $x | x_{(1)} \sim \text{P | Distribution of $\sum_{j=1}^n\ln\left(\frac{X_{(j)}}{X_{(1)}}\right)$ when $X_i$'s are i.i.d Pareto variables
A simpler approach might be to use the fact that if $x \sim \text{Pareto}(\theta,a)$, then conditioning upon $x \geq b$ results in $x \sim \text{Pareto}(b,a)$. Consequently, $x | x_{(1)} \sim \text{Pareto}(x_{... | Distribution of $\sum_{j=1}^n\ln\left(\frac{X_{(j)}}{X_{(1)}}\right)$ when $X_i$'s are i.i.d Pareto
A simpler approach might be to use the fact that if $x \sim \text{Pareto}(\theta,a)$, then conditioning upon $x \geq b$ results in $x \sim \text{Pareto}(b,a)$. Consequently, $x | x_{(1)} \sim \text{P |
47,090 | Likelihood comparable across different distributions | To get a sense of the problem, contemplate that density functions used to define likelihood functions are defined with respect to some dominating measure. So if we change the dominating measure, the likelihood function will change.
With more details (but informally) let the statistical model be given as a family of pro... | Likelihood comparable across different distributions | To get a sense of the problem, contemplate that density functions used to define likelihood functions are defined with respect to some dominating measure. So if we change the dominating measure, the l | Likelihood comparable across different distributions
To get a sense of the problem, contemplate that density functions used to define likelihood functions are defined with respect to some dominating measure. So if we change the dominating measure, the likelihood function will change.
With more details (but informally) ... | Likelihood comparable across different distributions
To get a sense of the problem, contemplate that density functions used to define likelihood functions are defined with respect to some dominating measure. So if we change the dominating measure, the l |
47,091 | What to do for AUC less than 0.5? | "Reversing" the AUC by taking AUC = 1 - AUC would be appropriate if you had no a priori information about whether to expect larger or lower values for the positive group. For instance if you were measuring a molecular biomarker, it could be present with a decreased concentration in the cancer patients. Unless and until... | What to do for AUC less than 0.5? | "Reversing" the AUC by taking AUC = 1 - AUC would be appropriate if you had no a priori information about whether to expect larger or lower values for the positive group. For instance if you were meas | What to do for AUC less than 0.5?
"Reversing" the AUC by taking AUC = 1 - AUC would be appropriate if you had no a priori information about whether to expect larger or lower values for the positive group. For instance if you were measuring a molecular biomarker, it could be present with a decreased concentration in the... | What to do for AUC less than 0.5?
"Reversing" the AUC by taking AUC = 1 - AUC would be appropriate if you had no a priori information about whether to expect larger or lower values for the positive group. For instance if you were meas |
47,092 | What to do for AUC less than 0.5? | How large are your training and test samples ? You have to keep in mind that, when training a classifier, there is some variance over the estimation of your model and you may achieve an accuracy which is lower than one of a constant classifier (or an AUC which is lower than 0.5). If you work with a small sample, this i... | What to do for AUC less than 0.5? | How large are your training and test samples ? You have to keep in mind that, when training a classifier, there is some variance over the estimation of your model and you may achieve an accuracy which | What to do for AUC less than 0.5?
How large are your training and test samples ? You have to keep in mind that, when training a classifier, there is some variance over the estimation of your model and you may achieve an accuracy which is lower than one of a constant classifier (or an AUC which is lower than 0.5). If yo... | What to do for AUC less than 0.5?
How large are your training and test samples ? You have to keep in mind that, when training a classifier, there is some variance over the estimation of your model and you may achieve an accuracy which |
47,093 | Effect size for Wilcoxon signed rank test that incorporates the possible range of the attribute | The effect size statistic Z/sqrt(N) --- sometimes called r --- in the paired observations case, is related to the probability that one group is larger than the other, or if you'd rather, that the differences are consistently greater than zero.
It doesn't measure the difference in values between the two groups. Other... | Effect size for Wilcoxon signed rank test that incorporates the possible range of the attribute | The effect size statistic Z/sqrt(N) --- sometimes called r --- in the paired observations case, is related to the probability that one group is larger than the other, or if you'd rather, that the diff | Effect size for Wilcoxon signed rank test that incorporates the possible range of the attribute
The effect size statistic Z/sqrt(N) --- sometimes called r --- in the paired observations case, is related to the probability that one group is larger than the other, or if you'd rather, that the differences are consistently... | Effect size for Wilcoxon signed rank test that incorporates the possible range of the attribute
The effect size statistic Z/sqrt(N) --- sometimes called r --- in the paired observations case, is related to the probability that one group is larger than the other, or if you'd rather, that the diff |
47,094 | Effect size for Wilcoxon signed rank test that incorporates the possible range of the attribute | After some digging and talking to my professor I came up with a solution for further reference.
The problem was, that I had the wrong idea about the Wilcoxon signed-rank test. The purpose of the test is to indicate if there is a shift between the two variables. The p-value suggests, that there is a statistically signif... | Effect size for Wilcoxon signed rank test that incorporates the possible range of the attribute | After some digging and talking to my professor I came up with a solution for further reference.
The problem was, that I had the wrong idea about the Wilcoxon signed-rank test. The purpose of the test | Effect size for Wilcoxon signed rank test that incorporates the possible range of the attribute
After some digging and talking to my professor I came up with a solution for further reference.
The problem was, that I had the wrong idea about the Wilcoxon signed-rank test. The purpose of the test is to indicate if there ... | Effect size for Wilcoxon signed rank test that incorporates the possible range of the attribute
After some digging and talking to my professor I came up with a solution for further reference.
The problem was, that I had the wrong idea about the Wilcoxon signed-rank test. The purpose of the test |
47,095 | Effect size for Wilcoxon signed rank test that incorporates the possible range of the attribute | If you have this many data, you really could have used a $t$-test with no problems. It's worth noting that the Wilcoxon signed-rank test is really testing a slightly different null hypothesis1,2. Often the reason for choosing the Wilcoxon signed-rank test is that people are not willing to assume the numbers are equal ... | Effect size for Wilcoxon signed rank test that incorporates the possible range of the attribute | If you have this many data, you really could have used a $t$-test with no problems. It's worth noting that the Wilcoxon signed-rank test is really testing a slightly different null hypothesis1,2. Ofte | Effect size for Wilcoxon signed rank test that incorporates the possible range of the attribute
If you have this many data, you really could have used a $t$-test with no problems. It's worth noting that the Wilcoxon signed-rank test is really testing a slightly different null hypothesis1,2. Often the reason for choosin... | Effect size for Wilcoxon signed rank test that incorporates the possible range of the attribute
If you have this many data, you really could have used a $t$-test with no problems. It's worth noting that the Wilcoxon signed-rank test is really testing a slightly different null hypothesis1,2. Ofte |
47,096 | In Bayesian Information Criterion (BIC), why does having bigger n get penalized? | I think the following will answer both of your questions.
First of all, you select the model that has the minimum value when using such criteria, therefore n has the opposite effect than you wrote down since increase in n alone will decrease the value.
Secondly, the information criteria is used to select between differ... | In Bayesian Information Criterion (BIC), why does having bigger n get penalized? | I think the following will answer both of your questions.
First of all, you select the model that has the minimum value when using such criteria, therefore n has the opposite effect than you wrote dow | In Bayesian Information Criterion (BIC), why does having bigger n get penalized?
I think the following will answer both of your questions.
First of all, you select the model that has the minimum value when using such criteria, therefore n has the opposite effect than you wrote down since increase in n alone will decrea... | In Bayesian Information Criterion (BIC), why does having bigger n get penalized?
I think the following will answer both of your questions.
First of all, you select the model that has the minimum value when using such criteria, therefore n has the opposite effect than you wrote dow |
47,097 | In Bayesian Information Criterion (BIC), why does having bigger n get penalized? | $C_p$ (and AIC) penalize each parameter with a factor of 2. BIC penalizes each parameter with a factor $\ln(n)$ which, for $n>7$ is greater than two, as stated in the paragraph you quote. Therefore, BIC places a greater penalty on each parameter and will tend to select more parsimonious models than AIC or $C_p$.
As you... | In Bayesian Information Criterion (BIC), why does having bigger n get penalized? | $C_p$ (and AIC) penalize each parameter with a factor of 2. BIC penalizes each parameter with a factor $\ln(n)$ which, for $n>7$ is greater than two, as stated in the paragraph you quote. Therefore, B | In Bayesian Information Criterion (BIC), why does having bigger n get penalized?
$C_p$ (and AIC) penalize each parameter with a factor of 2. BIC penalizes each parameter with a factor $\ln(n)$ which, for $n>7$ is greater than two, as stated in the paragraph you quote. Therefore, BIC places a greater penalty on each par... | In Bayesian Information Criterion (BIC), why does having bigger n get penalized?
$C_p$ (and AIC) penalize each parameter with a factor of 2. BIC penalizes each parameter with a factor $\ln(n)$ which, for $n>7$ is greater than two, as stated in the paragraph you quote. Therefore, B |
47,098 | In Bayesian Information Criterion (BIC), why does having bigger n get penalized? | There is a mistake in your inference. BIC does not penalize more data. The actual dependence on n is $\frac{\ln(n)}{n}$ which is a monotonically decreasing function for n>2 and thus, decreases (not increases) the penalty when n increases. When compared to AIC which is simply $\frac{1}{n}$, the decrease in penalty due t... | In Bayesian Information Criterion (BIC), why does having bigger n get penalized? | There is a mistake in your inference. BIC does not penalize more data. The actual dependence on n is $\frac{\ln(n)}{n}$ which is a monotonically decreasing function for n>2 and thus, decreases (not in | In Bayesian Information Criterion (BIC), why does having bigger n get penalized?
There is a mistake in your inference. BIC does not penalize more data. The actual dependence on n is $\frac{\ln(n)}{n}$ which is a monotonically decreasing function for n>2 and thus, decreases (not increases) the penalty when n increases. ... | In Bayesian Information Criterion (BIC), why does having bigger n get penalized?
There is a mistake in your inference. BIC does not penalize more data. The actual dependence on n is $\frac{\ln(n)}{n}$ which is a monotonically decreasing function for n>2 and thus, decreases (not in |
47,099 | Why is $L(z)=\phi(z) - z \left(1 - \Phi(z)\right) \ge 0$? Why is it (sorta) linear? | Since on the interval $x \in [z, \infty)$ it is clear that $x \ge z,$ use the fact that $\phi^\prime(x) = -x \phi(x)$ to conclude
$$\phi(z) = \int_z^\infty (-\phi^\prime(x))dx = \int_{z}^\infty x \phi(x) dx \ge z \int_z^\infty \phi(x) dx = z(1-\Phi(z)),$$
QED.
Concerning the second question, observe that $$L^\prime(z)... | Why is $L(z)=\phi(z) - z \left(1 - \Phi(z)\right) \ge 0$? Why is it (sorta) linear? | Since on the interval $x \in [z, \infty)$ it is clear that $x \ge z,$ use the fact that $\phi^\prime(x) = -x \phi(x)$ to conclude
$$\phi(z) = \int_z^\infty (-\phi^\prime(x))dx = \int_{z}^\infty x \phi | Why is $L(z)=\phi(z) - z \left(1 - \Phi(z)\right) \ge 0$? Why is it (sorta) linear?
Since on the interval $x \in [z, \infty)$ it is clear that $x \ge z,$ use the fact that $\phi^\prime(x) = -x \phi(x)$ to conclude
$$\phi(z) = \int_z^\infty (-\phi^\prime(x))dx = \int_{z}^\infty x \phi(x) dx \ge z \int_z^\infty \phi(x) d... | Why is $L(z)=\phi(z) - z \left(1 - \Phi(z)\right) \ge 0$? Why is it (sorta) linear?
Since on the interval $x \in [z, \infty)$ it is clear that $x \ge z,$ use the fact that $\phi^\prime(x) = -x \phi(x)$ to conclude
$$\phi(z) = \int_z^\infty (-\phi^\prime(x))dx = \int_{z}^\infty x \phi |
47,100 | General hints regarding the use of the binomial distribution in conditional probabilty problems | Specifically, why is $$P(Y=k|X=15, B)=P(Y=k|B)$$
The expression P(Y=k|X=15,B) = P(Y=k|B) is allowed because
$$P(Y=k|X=i,B) = {{20}\choose {k}} 0.6^k0.4^{20-k} $$
independent from $X=i$, so it can be left out
This logic is indeed not generally true, ie $P(a|b,c)$ may be different from $P(a|c)$, and is just true for t... | General hints regarding the use of the binomial distribution in conditional probabilty problems | Specifically, why is $$P(Y=k|X=15, B)=P(Y=k|B)$$
The expression P(Y=k|X=15,B) = P(Y=k|B) is allowed because
$$P(Y=k|X=i,B) = {{20}\choose {k}} 0.6^k0.4^{20-k} $$
independent from $X=i$, so it can b | General hints regarding the use of the binomial distribution in conditional probabilty problems
Specifically, why is $$P(Y=k|X=15, B)=P(Y=k|B)$$
The expression P(Y=k|X=15,B) = P(Y=k|B) is allowed because
$$P(Y=k|X=i,B) = {{20}\choose {k}} 0.6^k0.4^{20-k} $$
independent from $X=i$, so it can be left out
This logic is... | General hints regarding the use of the binomial distribution in conditional probabilty problems
Specifically, why is $$P(Y=k|X=15, B)=P(Y=k|B)$$
The expression P(Y=k|X=15,B) = P(Y=k|B) is allowed because
$$P(Y=k|X=i,B) = {{20}\choose {k}} 0.6^k0.4^{20-k} $$
independent from $X=i$, so it can b |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.