Dataset schema (column, type, value/length range):

idx               int64     1 to 56k
question          string    15 to 155 chars
answer            string    2 to 29.2k chars
question_cut      string    15 to 100 chars
answer_cut        string    2 to 200 chars
conversation      string    47 to 29.3k chars
conversation_cut  string    47 to 301 chars
44,701
Linear Combination of multivariate t distribution
It is true that the components of a multivariate-$t$ vector, and linear combinations thereof, are $t$-distributed. But linear combinations of arbitrary $t$-variables are not necessarily $t$-distributed. In fact, linear combinations of independent $t_\nu$ variables are not $t$-distributed. The comment by Joram Soch starts ...
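The distinction can be seen numerically: the components of a multivariate-$t$ vector share a single chi-square mixing variable, so they are uncorrelated but not independent, whereas separately drawn $t_\nu$ variables are independent. A minimal simulation sketch (numpy; parameters are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n, nu = 200_000, 10

# Multivariate t with identity scale matrix: one shared chi-square divisor
z = rng.standard_normal((n, 2))
w = rng.chisquare(nu, size=(n, 1))
x_mvt = z * np.sqrt(nu / w)        # each component is t_nu, but they are dependent

# Two genuinely independent t_nu variables
x_ind = rng.standard_t(nu, size=(n, 2))

def sq_corr(x):
    # correlation between squared components; the shared divisor makes
    # this positive for the multivariate t
    return np.corrcoef(x[:, 0] ** 2, x[:, 1] ** 2)[0, 1]

print(sq_corr(x_mvt))   # clearly positive
print(sq_corr(x_ind))   # near zero
```

Because the two constructions differ in this dependence, sums of their components have genuinely different distributions, even though every marginal is $t_\nu$.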
44,702
Linear Combination of multivariate t distribution
Please have a look at Walker, Glenn A., and John G. Saw. "The distribution of linear combinations of t-variables." Journal of the American Statistical Association 73.364 (1978): 876-878. The resulting PDF is described as a weighted sum of Student-$t$ distributions, and the paper shows how to obtain the weights. The author ...
44,703
Linear Combination of multivariate t distribution
P15 of Multivariate t Distributions and Their Applications (Kotz and Nadarajah) says "If X has the p-variate t distribution with degrees of freedom v, mean vector $\mu$, and correlation matrix R, then, for any nonsingular scalar matrix C and for any a, CX + a has the p-variate t distribution with degrees of freedom v, mean ...
44,704
Linear Combination of multivariate t distribution
The Student-$t$ distribution is a special case of the generalised hyperbolic distribution, which is closed under affine transforms according to its Wikipedia page (all linear transforms are affine transforms). Hence I would think all linear transformations of Student-$t$ random variables (with the same degrees of freedom) are student-t d...
44,705
Linear Combination of multivariate t distribution
If $X$ follows a multivariate t-distribution, then any linear combination of $X$ also follows a multivariate t-distribution with the same degrees of freedom: $$ X \sim t(\mu, \Sigma, \nu) \quad \Rightarrow \quad Y = AX + b \sim t(A\mu + b, A\Sigma A^\mathrm{T}, \nu) \; . $$ EDIT: Following @GeorgiBoshnakov's answer, I ...
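The stated closure property can be checked by simulation: sample $X \sim t(\mu, \Sigma, \nu)$ via the normal/chi-square mixture representation, transform, and compare sample moments with $A\mu + b$ and $\frac{\nu}{\nu-2} A\Sigma A^\mathrm{T}$ (the covariance implied by scale matrix $A\Sigma A^\mathrm{T}$). A sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)
n, nu = 300_000, 8
mu = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
A = np.array([[1.0, 1.0],
              [0.5, -1.0]])
b = np.array([3.0, 0.0])

# X ~ t(mu, Sigma, nu) via the mixture  X = mu + (L z) * sqrt(nu / W),  W ~ chi^2_nu
L = np.linalg.cholesky(Sigma)
z = rng.standard_normal((n, 2)) @ L.T
w = rng.chisquare(nu, size=(n, 1))
X = mu + z * np.sqrt(nu / w)

Y = X @ A.T + b

print(Y.mean(axis=0))   # close to A @ mu + b = [2.0, 2.5]
print(np.cov(Y.T))      # close to nu/(nu-2) * A @ Sigma @ A.T
```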
44,706
How to prove that $X^T e = 0$
$$\mathbf X'\mathbf e = \mathbf X'(\mathbf y -\mathbf {\hat y})= \mathbf X'(\mathbf y -\mathbf X\hat \beta) =...$$ ADDENDUM $$=\mathbf X'\left(\mathbf y -\mathbf X (\mathbf X'\mathbf X)^{-1}\mathbf X' \mathbf y\right) =\mathbf X'\mathbf y -\mathbf X'\mathbf X (\mathbf X'\mathbf X)^{-1}\mathbf X' \mathbf y$$ $$\mathbf...
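The orthogonality $\mathbf X'\mathbf e = \mathbf 0$ is easy to verify numerically on simulated data (a sketch, assuming numpy; the coefficients are made up):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])   # design with intercept
y = X @ np.array([1.0, 2.0, -0.5]) + rng.standard_normal(n)

beta_hat = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^{-1} X'y
e = y - X @ beta_hat                           # residual vector

print(X.T @ e)   # numerically zero in every coordinate
```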
44,707
How to sample using MCMC from a posterior distribution in general?
We don't use MCMC to calculate $p(\theta | y)$ for each value (or many values) of $\theta$. What MCMC (or the special case of Gibbs sampling) does is generate a (large) random sample from $p(\theta | y)$. Note that $p(\theta | y)$ is not being calculated; you have to do something with that vector (or matrix) of r...
44,708
How to sample using MCMC from a posterior distribution in general?
MCMC is a family of sampling methods (Gibbs, MH, etc.). The point of MCMC is that you cannot sample directly from the posterior distribution that you mentioned. The way MCMC works is that a Markov chain (the first "MC" in MCMC) is constructed whose stationary distribution is the posterior that you are interested in. You can...
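A minimal random-walk Metropolis sketch of this idea, for a toy posterior over a normal mean (all numbers here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(1.5, 1.0, size=50)      # toy data, known sd = 1

def log_unnorm_post(theta):
    log_lik = -0.5 * np.sum((data - theta) ** 2)   # N(theta, 1) likelihood
    log_prior = -0.5 * theta ** 2 / 100.0          # N(0, 10^2) prior
    return log_lik + log_prior                     # normalizing constant not needed

theta, draws = 0.0, []
for _ in range(20_000):
    proposal = theta + rng.normal(0.0, 0.5)        # symmetric random-walk proposal
    # accept with probability min(1, pi(proposal|y) / pi(theta|y))
    if np.log(rng.uniform()) < log_unnorm_post(proposal) - log_unnorm_post(theta):
        theta = proposal
    draws.append(theta)

draws = np.array(draws[5_000:])                    # drop burn-in
print(draws.mean(), draws.std())                   # posterior summaries from the sample
```

The chain's stationary distribution is the posterior, so after burn-in the draws behave like a (correlated) sample from it, and any posterior summary is computed from those draws.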
44,709
Is it correct that no statistical simulation is done using $2^{256}$ (or more) outputs from a MWC prng?
It's a bit hard to say 'no statistical simulation will ever', since forever is a very long time and we may find ways to do things we can't see any way to do now. However, for the foreseeable future, $2^{256}$ or $\sim 10^{77}$ simulations is so many orders of magnitude beyond what we'd be able to generate in reasonable...
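The magnitudes are easy to check directly; the rate of $10^{18}$ draws per second below is a deliberately generous assumption:

```python
# Back-of-the-envelope check: even at 10^18 draws per second,
# exhausting 2^256 outputs takes vastly longer than the age
# of the universe (~4.35e17 seconds).
draws = 2 ** 256                      # about 1.16e77
rate = 10 ** 18                       # draws per second (assumed, generous)
seconds = draws // rate
age_of_universe = 4.35e17             # seconds, approximate
print(seconds / age_of_universe)      # on the order of 1e41 universe-lifetimes
```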
44,710
Normalizing constant irrelevant in Bayes theorem? [duplicate]
Not all MCMC methods avoid the need for the normalising constant. However, many of them do (such as the Metropolis-Hastings algorithm), since the iteration process is based on the ratio $R(\theta_1,\theta_2)=\dfrac{\pi(\theta_1\vert x)}{\pi(\theta_2\vert x)}$, where $$\pi(\theta\vert x) = \dfrac{\pi(x\vert \theta)\...
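The cancellation can be demonstrated numerically on a toy example with a flat prior (hypothetical numbers; the evidence is computed by brute-force quadrature only to show that it drops out of the ratio):

```python
import numpy as np

def unnorm(theta, x):
    # proportional to pi(x|theta) * pi(theta): a binomial likelihood (n = 10)
    # times a flat prior on (0, 1); constants omitted since they cancel too
    return theta**x * (1.0 - theta)**(10 - x)

x = 7
grid = np.linspace(0.0, 1.0, 10001)
Z = unnorm(grid, x).sum() * (grid[1] - grid[0])   # evidence pi(x), by quadrature

t1, t2 = 0.6, 0.3
ratio_unnorm = unnorm(t1, x) / unnorm(t2, x)
ratio_norm = (unnorm(t1, x) / Z) / (unnorm(t2, x) / Z)
print(ratio_unnorm, ratio_norm)   # identical: the normalizer cancels
```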
44,711
Normalizing constant irrelevant in Bayes theorem? [duplicate]
When ignoring the probability of the evidence, you obtain something that is proportional to the proper posterior distribution. In many situations you can easily normalize your improper (or unnormalized) posterior after calculating it. This is because once you have your result (e.g. a marginal over some random variable),...
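A grid-based sketch of this renormalization step, for a toy posterior proportional to a Beta shape (hypothetical numbers):

```python
import numpy as np

theta = np.linspace(0.0, 1.0, 10001)
width = theta[1] - theta[0]
unnorm = theta**3 * (1.0 - theta)**7       # proportional to a Beta(4, 8) posterior
post = unnorm / (unnorm.sum() * width)     # divide by the numerical integral

post_mean = (theta * post * width).sum()
print(post_mean)   # close to the Beta(4, 8) mean, 1/3
```

After dividing by the numerical integral, the grid values behave like a proper density, and summaries such as the posterior mean follow directly.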
44,712
Finding appropriate distribution that fit to for a frequency distribution of a variable
Agree with Dmitry and others in the above discussion. I have the following general comments that might help. We can identify 4 steps in fitting distributions:
1) Model/function choice: hypothesize families of distributions;
2) Estimate parameters;
3) Evaluate quality of fit;
4) Goodness-of-fit statistical tests.
The first...
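The four steps can be sketched with scipy on simulated data (the family, parameters, and thresholds here are assumptions for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
data = rng.gamma(shape=2.0, scale=3.0, size=2000)   # pretend this is observed data

# Step 1: hypothesize a family (gamma here, say from inspecting the histogram)
# Step 2: estimate parameters by maximum likelihood
shape, loc, scale = stats.gamma.fit(data, floc=0)

# Steps 3-4: evaluate the fit, e.g. with a Kolmogorov-Smirnov test
ks = stats.kstest(data, stats.gamma(shape, loc=loc, scale=scale).cdf)
print(shape, scale, ks.pvalue)
```

Note that the KS p-value is optimistic when the parameters were estimated from the same data; a corrected test (or a held-out sample) is preferable for a formal goodness-of-fit decision.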
44,713
Finding appropriate distribution that fit to for a frequency distribution of a variable
First of all, I need to say that I do agree with @whuber that just explaining the data with some "commonly used distribution" is probably not the best idea. A good idea would be to find the underlying model and parametrize it. And it does not need to be a distribution at all. However, if your question is just about the...
44,714
Interpreting $R^2$, F-statistic & p-value of a model
The F-statistic tells you if the model fits the data better than the mean. Or, in other words, if $H_0:\;R^2=0$ should be rejected. See: Wikipedia. To illustrate that the formula given in the link is indeed used by summary.lm:
x1 <- 1:10
set.seed(42)
x2 <- rnorm(10)
y <- x1+2*x2+rnorm(10)
fit0 <- lm(y~1)
fit1 <- lm(y~...
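The same relationship, $F = \dfrac{R^2/k}{(1-R^2)/(n-k-1)}$, can be sketched in Python on simulated data (coefficients are made up), confirming it agrees with the usual ANOVA decomposition:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 100, 2
X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])
y = X @ np.array([1.0, 1.0, 2.0]) + rng.standard_normal(n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
rss = resid @ resid
tss = (y - y.mean()) @ (y - y.mean())
r2 = 1 - rss / tss

# overall F-statistic from R^2 alone ...
F = (r2 / k) / ((1 - r2) / (n - k - 1))
# ... and from the ANOVA decomposition; the two agree
F_anova = ((tss - rss) / k) / (rss / (n - k - 1))
print(r2, F, F_anova)
```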
44,715
Solving the Kolmogorov forward equation for transition probabilities
This is a continuous time Markov process; $\mathbb{Q}$ is an infinitesimal generator of the transition matrices $\mathbb{P}(t)$ giving the transition probabilities over a span of time $t \ge 0$, the primes denote differentiation with respect to $t$, and the "Kronecker delta" is the initial condition $\mathbb{P}(0) = \m...
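Since $\mathbb{P}'(t) = \mathbb{P}(t)\mathbb{Q}$ with $\mathbb{P}(0) = \mathbb{I}$, the solution is the matrix exponential $\mathbb{P}(t) = e^{\mathbb{Q}t}$, which can be computed directly (a sketch with a made-up generator, using scipy):

```python
import numpy as np
from scipy.linalg import expm

# A made-up 3-state generator: off-diagonal rates >= 0, rows sum to zero
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])

t = 0.7
P = expm(Q * t)        # P(t) solves P'(t) = P(t) Q,  P(0) = I
print(P.sum(axis=1))   # each row sums to 1, as a transition matrix must
```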
44,716
Derivation of the effect of unmodeled confounders on OLS estimates
I don’t know where you got this last expression from, but the derivation of the omitted variable bias formula goes as follows. You want to estimate the long regression that includes both $X_i$ and $W_i$, but for some reason you do not observe $W_i$. Then your model becomes: $$ \begin{align} y_i &= \alpha + \beta ...
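The resulting bias formula — the short-regression slope converges to $\beta + \gamma\delta$, where $\gamma$ is the coefficient on the omitted $W_i$ and $\delta$ is the slope from the auxiliary regression of $W_i$ on $X_i$ — can be checked by simulation (coefficients are hypothetical):

```python
import numpy as np

# True model: y = 1 + 2*x + 1.5*w + e, with w correlated with x.
rng = np.random.default_rng(6)
n = 100_000
x = rng.standard_normal(n)
w = 0.8 * x + rng.standard_normal(n)        # confounder, correlated with x
y = 1.0 + 2.0 * x + 1.5 * w + rng.standard_normal(n)

def slope(u, v):
    return np.cov(u, v)[0, 1] / np.var(u, ddof=1)

b_short = slope(x, y)        # biased slope from the short regression of y on x
delta = slope(x, w)          # auxiliary regression of w on x
print(b_short, 2.0 + 1.5 * delta)   # bias formula: beta + gamma*delta ~ 3.2
```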
44,717
Formatting graphs and figures: why and when is it bad to include horizontal lines?
I think about it this way: When I prepare a figure for a paper, I usually want to both show data and make some point about the data. Anything that helps these goals in a simple clear way is a worthwhile addition, anything else should be removed (without distorting the data of course). In the case of horizontal (or vert...
44,718
Formatting graphs and figures: why and when is it bad to include horizontal lines?
It's wrong because the default behaviour of Excel is to draw highly prominent gridlines, which are distracting and "chartjunky", and because it violates the formatting rules for the journal. Journals often have lowest common denominator formatting rules. They're there so that it's harder to screw things up, not because it's the best...
44,719
How to add outliers to an existing data?
You could add random noise to the existing data objects, i.e. changing a given percentage of the data entries to random values within the data range, or swapping some entries between two data objects (which won't change the value distribution in this dimension). This method is often used to test the robustness of algor...
44,720
How to add outliers to an existing data?
There are two commonly seen approaches:
1) Add outliers to real data by randomization methods.
2) In order to obtain a rare class, downsample a class to desired sparsity (usually, this should be <<1%).
For 1 there are some variants - modifying single attributes, drawing each attribute, but from different instances etc.; per...
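Approach 1 can be sketched as follows (the fraction, data, and ranges below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(50.0, 2.0, size=(1000, 3))   # hypothetical clean data

frac = 0.01                                    # fraction of entries to corrupt
contaminated = data.copy()
mask = rng.random(data.shape) < frac
lo, hi = data.min(), data.max()
# replace masked entries with random values within the observed data range
contaminated[mask] = rng.uniform(lo, hi, size=mask.sum())

print(mask.mean())   # roughly the requested 1% of entries changed
```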
44,721
How to add outliers to an existing data?
Outliers are usually thought of in relation to the model, as the comments already discuss. But that does not say anything about how they are generated: They can be rare events by the very process described by the model (roughly 1 in 10⁹ standard normally distributed numbers will be < -6) or they can be generated by a p...
44,722
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
I looked at the answers on SO. I don't think they are satisfactory. People often argue for the normal distribution because of the central limit theorem. That may be okay in large samples when the problem involves averages. But machine learning problems can be more complex and sample sizes are not always large enou...
44,723
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
Machine learning (and statistics as well) treats data as a mix of deterministic (causal) and random parts. The random part of the data usually has a normal distribution. (Really, the causal relation is the reverse: the distribution of the random part of a variable is what is called normal.) The central limit theorem says that the sum of large n...
44,724
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
One reason that normal distributions are often (but not always!) assumed: the nature of the distribution often leads to extremely efficient computation. For example, in generalized linear regression, the solution is technically in closed form when your distribution is Gaussian: $\hat \beta = (X^T X)^{-1} X^T Y$ where a...
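That closed form is a one-liner to verify on simulated data (a sketch; coefficients are made up):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal((n, 2))])
beta_true = np.array([0.5, -1.0, 2.0])         # hypothetical coefficients
y = X @ beta_true + rng.standard_normal(n)     # Gaussian noise

# With Gaussian errors, maximum likelihood reduces to ordinary least squares,
# available in closed form: beta_hat = (X'X)^{-1} X'y
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(beta_hat)   # close to beta_true
```

With most non-Gaussian error models no such closed form exists and the fit requires iterative optimization, which is one concrete sense in which the Gaussian assumption buys computational efficiency.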
44,725
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
I had the same question: "what is the advantage of doing a Gaussian transformation on predictors or target?" In fact, the caret package has a pre-processing step that enables this transformation. I tried reasoning this out and am summarizing my understanding - Usually the data distribution in Nature follows a Normal distr...
44,726
Can anyone tell me why we always use the Gaussian distribution in Machine learning?
I'm currently studying machine learning and the same question popped into my mind. What I think the reason should be is that in every machine learning problem we assume we have abundant observational data available, and whenever data tends to infinity it gets normally distributed around its mean, and that's what the Normal dist...
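The Central Limit Theorem behavior described above can be checked with a small simulation (a sketch with made-up parameters, using only the Python standard library): sample means drawn from a decidedly non-normal uniform population still cluster normally around the true mean.

```python
import math
import random

random.seed(0)

# Draw many sample means from a non-normal (uniform on [0, 1]) population.
n, reps = 50, 2000
means = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]

grand_mean = sum(means) / reps
sd = math.sqrt(sum((m - grand_mean) ** 2 for m in means) / (reps - 1))

# CLT prediction: the means are approximately Normal(0.5, sqrt((1/12)/n)).
print(round(grand_mean, 3), round(sd, 3))  # sd should be near 0.041
```

The point is that normality here is a property of averages of many observations, not of the raw data itself.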
44,727
Understanding regression results when data are subsetted
To expand on Peter Flom's answer (which is echoed in Michael Chernick's subsequent reply), this graphic may help the intuition. The following R code shows how it was produced. Briefly, it generates 400 data points per year, with values of variable $x$ ranging variously from $0$ to $2$ through $2$ to $4$, shifting upw...
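The effect the graphic illustrates can be mimicked with a toy simulation (in Python rather than the original R, with invented parameters): within each year the slope of $y$ on $x$ is negative, yet both $x$ and $y$ drift upward across years, so the pooled slope comes out strongly positive.

```python
import random

random.seed(1)

def slope(pts):
    """Ordinary least-squares slope of y on x for a list of (x, y) pairs."""
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    num = sum((x - mx) * (y - my) for x, y in pts)
    den = sum((x - mx) ** 2 for x, _ in pts)
    return num / den

by_year, pooled = [], []
for year in range(5):
    pts = []
    for _ in range(200):
        x = year + random.random()                      # x shifts upward each year
        y = 2 * year - 1.0 * (x - year) + random.gauss(0, 0.1)  # within-year slope -1
        pts.append((x, y))
    by_year.append(slope(pts))
    pooled.extend(pts)

print([round(s, 2) for s in by_year], round(slope(pooled), 2))
```

Every per-year slope is negative while the pooled slope is positive, which is exactly the aggregation effect behind the puzzling subsetted results.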
44,728
Understanding regression results when data are subsetted
In this case, it does not appear to have to do with sample sizes, since the CIs for the individual years do not even overlap with the CI for the whole period. It's hard to say exactly what's going on. Your code would help - did the model for the full data set include year as an IV? What is your dependent variable? What ...
44,729
Understanding regression results when data are subsetted
I think the smaller sample size explains why some years are significant and others are not. Actually, if you do a multiplicity correction for running 5 different tests, you may find that none of them are significant given a proper p-value adjustment. But Peter has hit on an important observation. The individual ...
44,730
A question about notation of Bayes' Theorem
In fact in the notation $$p(\theta|y)\propto p(y,\theta)$$ it is understood that the symbol "$\propto$" means that the two members are proportional functions of the variable $\theta$. This is not ambiguous because it is clearly understood that we are dealing with a distribution on the space of the parameter $\theta$. T...
44,731
A question about notation of Bayes' Theorem
You are right: the posterior distribution is proportional to the joint distribution of $y$ and $\theta$, and the proportionality constant is the inverse of the marginal distribution (a constant value) $p(y)$.
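A tiny discrete example (with made-up numbers) shows the normalization at work: dividing the joint by $p(y)$ rescales it into the posterior without changing the ratios between parameter values, which is why "$\propto$" is all Bayes' theorem needs.

```python
# Joint probabilities p(y, theta) for a fixed observed y (assumed numbers).
joint = {"theta1": 0.12, "theta2": 0.08}
p_y = sum(joint.values())                       # marginal p(y) = 0.20
posterior = {t: v / p_y for t, v in joint.items()}
# Same 3:2 ratio as the joint, but now the values sum to 1.
print({t: round(p, 3) for t, p in posterior.items()})
```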
44,732
What is the meaning of operators in regression or anova formulas in R
The formulas in R have their own mini-language. You can get detailed information in the R session with help(formula) which you can also find here. For the sake of the example, let's say that you predict $Z$ from $X$ and $Y$ and let's drop the error terms. $Z \sim X + Y$ means that you fit an additive model $Z_i...
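What the formula operators do to the design matrix can be illustrated by hand in Python (a sketch; the column layout is the point, not any particular library): `Z ~ X * Y` expands to `X + Y + X:Y`, i.e., the additive columns plus an interaction column.

```python
# Rows of the design matrix for Z ~ X + Y (additive) versus Z ~ X * Y,
# which expands to X + Y + X:Y (the extra interaction column).
X = [1.0, 2.0, 3.0]
Y = [0.5, 1.0, 1.5]

additive = [(1.0, x, y) for x, y in zip(X, Y)]            # intercept, X, Y
interaction = [(1.0, x, y, x * y) for x, y in zip(X, Y)]  # adds the X:Y column
print(interaction[1])
```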
44,733
Predict probabilities from Firth logistic regression in R
You can probably compute any predictions you want with a little algebra. Let's consider the example dataset, data(sex2) fm <- case ~ age+oc+vic+vicl+vis+dia fit <- logistf(fm, data=sex2) A design matrix is the only missing piece to compute predicted probabilities once we get the regression coefficients, given by betas <-...
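The same computation, the design matrix times the coefficients pushed through the inverse logit, can be sketched in plain Python (the coefficient values below are made up, standing in for logistf's fitted betas):

```python
import math

def predict_prob(coefs, intercept, x):
    """Inverse logit of the linear predictor: 1 / (1 + exp(-(b0 + x.b)))."""
    eta = intercept + sum(b * xi for b, xi in zip(coefs, x))
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical coefficients and covariate row, for illustration only.
p = predict_prob([0.5, -1.2], -0.3, [1.0, 0.5])
print(round(p, 4))
```

Any row of the design matrix can be pushed through the same function, which is all that predicted probabilities require once the penalized coefficients are in hand.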
44,734
Predict probabilities from Firth logistic regression in R
An alternative approach is the brglm package. For example, using the same data/model as @chl's Answer data(sex2) fm <- case ~ age + oc + vic + vicl + vis + dia fit <- brglm(fm, data = sex2) predict(fit, newdata = sex2[1:5, ], type = "response") That yields the > predict(fit, newdata = sex2[1:5, ], type = "response") ...
44,735
SVM options in scikit-learn
I realize this is a super old question, but I ran into this same thing today, and found this document. Section 7.3, which describes shrinkage as implemented in libSVM (around which sklearn's SVM is a wrapper), begins with the following useful blurb: The shrinking technique reduces the size of the problem by temporaril...
44,736
SVM options in scikit-learn
scale_C=True means that the C parameter of the SVM problem is scaled with the number of samples. This is the default in libSVM and liblinear, however if you train models with a widely-varying number of samples it means that a single value of C will not be adequate for all the models. For this reason, we advocate using ...
44,737
Bayesian prior corresponding to penalized regression coefficients
The L2 penalty penalizes the sum of squared betas but not via a constraint such as $< C$. The L1 penalty is the lasso. For the Bayesian lasso see the 2008 JASA paper by Trevor Park and George Casella.
44,738
Bayesian prior corresponding to penalized regression coefficients
For the lasso penalty this corresponds to a double exponential prior - so long as you are taking the posterior mode as your estimate. If you constrain the betas to be positive then you have an exponential prior. The parameter of the exponential distribution $\lambda$ has a correspondence with your $C$ in that you ca...
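The correspondence is easy to verify numerically: the negative log of the double exponential (Laplace) density is the lasso penalty $\lambda|\beta|$ plus a constant, so maximizing the posterior is the same as minimizing the penalized loss. A sketch with $\lambda = 2$ (an arbitrary choice for illustration):

```python
import math

def neg_log_laplace(beta, b):
    """-log of the double-exponential density (1/(2b)) * exp(-|beta|/b)."""
    return math.log(2 * b) + abs(beta) / b

lam = 2.0
b = 1.0 / lam  # Laplace scale corresponding to penalty weight lambda

# Up to the constant log(2b), the negative log-prior is lam * |beta|,
# so differences in it reproduce the lasso penalty exactly.
diff = neg_log_laplace(0.7, b) - neg_log_laplace(0.0, b)
print(diff)  # equals lam * 0.7 = 1.4
```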
44,739
Bayesian prior corresponding to penalized regression coefficients
The $L_2$ constraint on the coefficients is Tikhonov regularization. For the case where the prior is multivariate normal and the model is linear, the posterior is also multivariate normal. Moreover, the mean of the posterior distribution occurs at a point in parameter space that can also be...
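For a single predictor the correspondence is one line of algebra: the ridge estimate $\sum x_i y_i / (\sum x_i^2 + \lambda)$ with $\lambda = \sigma^2/\tau^2$ is exactly the posterior mean under a $N(0, \tau^2)$ prior on $\beta$. A sketch with toy numbers:

```python
# One-predictor illustration (toy data): the ridge/Tikhonov estimate is the
# posterior mean under a zero-mean normal prior, with lam = sigma^2 / tau^2.
x = [1.0, 2.0, 3.0]
y = [1.1, 1.9, 3.2]
lam = 0.5

sxy = sum(xi * yi for xi, yi in zip(x, y))
sxx = sum(xi * xi for xi in x)
beta_ridge = sxy / (sxx + lam)   # shrunk below the OLS estimate sxy / sxx
print(round(beta_ridge, 4))
```

Larger $\lambda$ (a tighter prior) shrinks the estimate further toward zero, which is the Bayesian reading of the penalty strength.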
44,740
Bayesian prior corresponding to penalized regression coefficients
If you want your $\beta$s to be non-negative and sum to a given value then it seems a scaled Dirichlet prior would make sense.
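Sampling from such a scaled Dirichlet is straightforward with the standard library: draw independent Gammas, normalize, and rescale to the required total (a sketch; the shape parameters below are made up):

```python
import random

random.seed(0)

def scaled_dirichlet(alphas, total):
    """Sample betas >= 0 summing to `total` via normalized Gamma draws."""
    g = [random.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [total * v / s for v in g]

betas = scaled_dirichlet([1.0, 2.0, 3.0], total=10.0)
print([round(b, 3) for b in betas], round(sum(betas), 3))
```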
44,741
What is the difference between tests to check homogeneity of variance and ANOVA?
Fligner-Killeen's and Levene's tests are two ways to test the ANOVA assumption of "equal variances in the population" before conducting the ANOVA test. Levene's is widely used and is typically the default in programs like SPSS, but either test (or even Brown-Forsythe) is acceptable. ANOVA is the omnibus test of mean ...
44,742
What is the difference between tests to check homogeneity of variance and ANOVA?
Just thought I'd post a little more about the Fligner-Killeen test. It is a nonparametric way of comparing the variances of more than two groups that is very robust against non-normal data. Essentially, it starts off the same way as a Brown-Forsythe test for the ANOVA, obtaining the absolute deviations of each observa...
44,743
What is the difference between tests to check homogeneity of variance and ANOVA?
Don't know about Fligner, but Levene's test is actually an ANOVA of absolute deviations from group means (or group medians, this would be Brown-Forsythe test).
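That transformation is the whole trick: compute absolute deviations from each group's center, then run an ordinary one-way ANOVA on them. The deviation step, sketched with made-up data:

```python
from statistics import mean, median

groups = [[3.1, 2.9, 3.4], [5.0, 4.1, 6.2], [2.0, 2.2, 1.8]]

# Levene: absolute deviations from group means.
dev_mean = [[abs(x - mean(g)) for x in g] for g in groups]
# Brown-Forsythe: absolute deviations from group medians (more robust).
dev_median = [[abs(x - median(g)) for x in g] for g in groups]

# Feeding dev_mean (or dev_median) into a one-way ANOVA gives the test.
print([round(d, 3) for d in dev_median[0]])
```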
44,744
What is the difference between tests to check homogeneity of variance and ANOVA?
ANOVA is called "analysis of variance" because it decomposes the total variance into variance within groups (the "error") and variance among the group means. So it tests whether group means are equal by comparing the variance among them to that expected based solely on the within-group variance: is the variation among ...
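That decomposition can be computed directly (a standard-library sketch with toy data): the total variation splits into between-group and within-group sums of squares, and the ratio of their mean squares is the F statistic.

```python
from statistics import mean

groups = [[6.0, 8.0, 7.0], [5.0, 4.0, 6.0], [8.0, 9.0, 10.0]]
k = len(groups)
n = sum(len(g) for g in groups)
grand = mean([x for g in groups for x in g])

# Variation of the group means around the grand mean (weighted by group size).
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
# Variation of observations around their own group mean (the "error").
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 3))
```

A large F says the group means vary more than the within-group noise alone would predict, which is the sense in which an analysis of variance tests equality of means.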
44,745
How to interpret the margin of error in a poll?
The claim that the margin of error is $4.9$% follows from assuming that the poll was conducted as if a box had been filled with tickets--one for each member of the entire population (of "hardcore Republican voters")--thoroughly mixed, $400$ of those were blindly taken out, and each of the associated $400$ voters had wr...
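The $4.9$% figure comes straight from the binomial standard error at the worst case $p = 0.5$, which is why pollsters quote it regardless of the observed split:

```python
import math

# Worst-case 95% margin of error for a simple random sample of n = 400.
n, p, z = 400, 0.5, 1.96
moe = z * math.sqrt(p * (1 - p) / n)
print(round(100 * moe, 1))  # 4.9 (percent)
```

Note that the formula involves only the sample size $n$, not the population size of 700,000, which is the sense in which a sample of 400 can stand in for the whole electorate.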
44,746
How to interpret the margin of error in a poll?
I won't try to deliver my own answer, but I would refer you to the "What Is a Survey?" booklet compiled by the Survey Research Methods Section of the American Statistical Association. (Fritz Scheuren endorsing it on the title page is a former President of ASA from about five years ago. He used to be a high profile stat...
44,747
How to interpret the margin of error in a poll?
To answer your question: It is possible to extrapolate from a sample of 400 to the views of all 700,000. This is contingent on the sample being random. Statistical Power is the topic you'd want to look into to confirm this. If I ask 400 of my closest friends, this doesn't work. To get a truly random sample, I'd have to...
44,748
How to interpret the margin of error in a poll?
The short answer is yes, you can extrapolate. Longer answer: The key question is whether the pollsters took a random sample of a population. They claim to have taken a random sample of Republican primary voters. But this is difficult. People refuse to answer polls, or they aren't home or other things can go wrong; even...
44,749
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is non-normal?
With such a small sample size the normality assumption is rather important. You may consider the Wilcoxon signed rank test if you think this assumption is faulty. If the population is normally distributed, there is no minimum sample size. If the mean difference is small relative to the population variance, then you wi...
44,750
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is non-normal?
There is no minimum sample size for a t-test. But as @shabbychef noted, you will have very little power.
44,751
What is a minimum sample size for a paired t-test and what is a non-parametric equivalent if data is non-normal?
What is the minimum sample size for a paired t-test? Generally speaking for the ordinary paired t-test, two pairs is the smallest, yielding 1 d.f. Which assumption should I check for paired t-test? Normally, I'd try to assess all of them, but if you only have 4 pairs, it's just about hopeless to try. You have four pai...
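With four pairs the arithmetic is tiny: the statistic is just the mean difference over its standard error, compared against a t distribution with $n - 1 = 3$ d.f. (a sketch with invented measurements):

```python
import math
from statistics import mean, stdev

# Hypothetical paired measurements on 4 subjects.
before = [10.0, 12.0, 9.0, 11.0]
after = [11.0, 13.5, 9.5, 12.0]
d = [a - b for a, b in zip(after, before)]

# Paired t statistic: mean difference / (sd of differences / sqrt(n)).
t = mean(d) / (stdev(d) / math.sqrt(len(d)))
print(round(t, 3))  # compare to t with n - 1 = 3 d.f.
```

With only 3 degrees of freedom the critical value is large (about 3.18 two-sided at the 5% level), which is the power problem in numbers.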
44,752
Which distribution to use with MCMC and empirical data?
The Kolmogorov–Smirnov test is always a good way to see if an arbitrary distribution fits. You can use the test cited below to see if two sets of data came from the same distribution: Li, Q. and E. Maasoumi and J.S. Racine (2009), “A Nonparametric Test for Equality of Distributions with Mixed Categorical and Continuou...
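The two-sample statistic itself is simple to compute by hand: it is the largest vertical gap between the two empirical CDFs (a minimal sketch, ignoring ties-handling subtleties and p-values):

```python
def ks_statistic(a, b):
    """Max vertical distance between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for t in sorted(set(a) | set(b)):
        fa = sum(x <= t for x in a) / len(a)
        fb = sum(x <= t for x in b) / len(b)
        d = max(d, abs(fa - fb))
    return d

print(ks_statistic([1, 2, 3, 4], [1, 2, 3, 4]))  # 0.0 for identical samples
print(ks_statistic([1, 2], [3, 4]))              # 1.0 for disjoint samples
```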
44,753
Which distribution to use with MCMC and empirical data?
Note that goodness of fit tests can only rule out distributions, they don't prove which distribution the data came from. And in many cases they may have low power to rule out some distributions, so you really don't know if the data comes from that distribution, or you just don't have the power. But note that you can h...
44,754
Which distribution to use with MCMC and empirical data?
There is no definitive answer to your second question, since all the methods in statistics are dedicated to developing distributions to fit empirical data. So the "best practice" would be finding the appropriate statistical model, which might have generated the data.
44,755
Which distribution to use with MCMC and empirical data?
Without some extra context the question is difficult to answer. What is your real-world data? Models (a theoretical distribution for your data) come from applications, not vacuums. There isn't one best way to approximate an unknown distribution in practice. There isn't even one "best". As a general comment, you can get...
44,756
Converting arbitrary distribution to uniform one
If $X$ has the (cumulative) distribution function $F(x)=P(X<x)$, then $F(X)$ has a uniform distribution on $[0,1]$. You don't know what $F$ is, but with N = 500,000 data points you could simply use the empirical distribution function: $$\hat{F}(x) = \frac{1}{N} \sum_{i=1}^N 1[x_i\leq x]$$ where $1[A]$ is the indicator ...
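A minimal sketch of this probability integral transform, using ranks to evaluate the empirical CDF at each data point (the exponential sample is just a stand-in for the unknown data):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=500_000)  # stand-in for the real data

# The empirical CDF at each observation is its rank divided by N
ranks = np.argsort(np.argsort(x))             # 0-based rank of each value
u = (ranks + 1) / len(x)                      # F_hat(x_i), values in (0, 1]

# u is essentially exactly uniform: each decile holds ~10% of the points
counts, _ = np.histogram(u, bins=10, range=(0.0, 1.0))
```

Because the ranks are a permutation of 0..N−1, the transformed values land on an evenly spaced grid in (0, 1], so each decile bin holds almost exactly N/10 points regardless of the shape of the original distribution.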
44,757
Converting arbitrary distribution to uniform one
Suppose you have a cumulative distribution function $F$ of the variable in question. Suppose the value given is $x$, and the range is $[r_1,r_2]$ with $x\in[r_1,r_2]$. Then if $N$ is the number of values falling into that range, the following should hold: $$F(r_2)-F(r_1)=\frac{N}{500\,000}$$ This is an equation ...
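A quick numerical check of that relation on simulated data from a known distribution (standard normal here, so $F$ is the normal CDF; the sample size matches the question's 500,000):

```python
import math
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=500_000)

r1, r2 = -0.5, 1.0
n_in_range = int(np.count_nonzero((x >= r1) & (x < r2)))

def Phi(t):
    """CDF of the standard normal, via the error function."""
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

expected_fraction = Phi(r2) - Phi(r1)   # about 0.533 for these endpoints
observed_fraction = n_in_range / len(x)
```

The observed fraction of points in the range agrees with the CDF difference to well within sampling error.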
44,758
In relation to clinical trials, what is clinical reasoning in contrast to statistical reasoning?
I like @nico's response because it makes clear that statistical and pragmatic thinking shall come hand in hand; this also has the merit to bring out issues like statistical vs. clinical significance. But about your specific question, I would say this is clearly detailed in the two sections that directly follow your quo...
44,759
In relation to clinical trials, what is clinical reasoning in contrast to statistical reasoning?
I have not read the book, but my best guess would be that the author wants to point out that critical reasoning sometimes has to be applied when bringing statistics to biological and medical issues. The sole fact that, for instance, a treatment does not have a "statistically significant" effect does not imply that the ...
44,760
How to fix the threshold for statistical validity of p-values produced by ANOVAs?
Hey, but it seems you already looked at the results! Usually, the risk of falsely rejecting the null (Type I error, or $\alpha$) should be decided before starting the analysis. Power might also be fixed to a given value (e.g., 0.80). At least, this is the "Neyman-Pearson" approach. For example, you might consider a ris...
44,761
How to fix the threshold for statistical validity of p-values produced by ANOVAs?
My advice would be to tread carefully with p-values if you didn't have a specific hypothesis in mind before you started the experiment. Adjusting p-values for multiple and "vaguely specified" hypothesis (e.g. not specifying the alternative hypothesis) is difficult. I suppose the "purist" would tell you that this shoul...
44,762
Can statistical prediction be asymmetric?
The coefficient of $X_n$ (and its significance) in the regression of $X_1$ on $X_2, \ldots, X_n$ can be computed by first obtaining the residuals $Y_1$ for the regression of $X_1$ on $X_2, \ldots, X_{n-1}$ and obtaining the residuals $Y_n$ for the regression of $X_n$ on $X_2, \ldots, X_{n-1}$. Then you regress $Y_1$ o...
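That residual-on-residual construction (the Frisch–Waugh–Lovell theorem) is easy to verify on simulated data; the data-generating coefficients below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
X2, X3 = rng.normal(size=n), rng.normal(size=n)
Xn = 0.5 * X2 + rng.normal(size=n)
X1 = 1.0 + 2.0 * X2 - 1.0 * X3 + 0.7 * Xn + rng.normal(size=n)

def ols(y, X):
    """Least-squares coefficients, with an intercept prepended."""
    A = np.column_stack([np.ones(len(y)), X])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def residuals(v, controls):
    """Residuals of v after regressing it on the controls (plus intercept)."""
    coefs = ols(v, controls)
    A = np.column_stack([np.ones(len(v)), controls])
    return v - A @ coefs

# Coefficient on Xn from the full regression of X1 on X2, X3, Xn
coef_full = ols(X1, np.column_stack([X2, X3, Xn]))[3]

# The same coefficient via the residual-on-residual regression
controls = np.column_stack([X2, X3])
y1 = residuals(X1, controls)
yn = residuals(Xn, controls)
coef_fwl = ols(y1, yn)[1]
```

The two coefficients agree to numerical precision, which is the theorem: the full-regression coefficient on $X_n$ equals the slope from regressing residualized $X_1$ on residualized $X_n$.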
44,763
Can statistical prediction be asymmetric?
You are trying to estimate the model \begin{align} X_1&=\alpha_0+\alpha_2X_2+...+\alpha_nX_n+\varepsilon,\\ X_n&=\beta_0+\beta_1X_1+...+\beta_{n-1}X_{n-1}+\eta. \end{align} For such a model ordinary least squares will give biased estimates. Assuming that $X_2,...,X_{n-1}$ are either deterministic or independent from $\varep...
44,764
Estimating the probability that a software change fixed a problem
This question asks for a prediction limit. This tests whether a future statistic is "consistent" with previous data. (In this case, the future statistic is the post-fix value of 223.) It accounts for a chance mechanism or uncertainty in three ways: The data themselves can vary by chance. Because of this, any estima...
44,765
Estimating the probability that a software change fixed a problem
There are a few ways of doing this problem. The way I would tackle this problem is as follows. The data you have comes from a geometric distribution. That is, the number of Bernoulli trials before a failure. The geometric distribution has one parameter p, which is the probability of failure at each point. For your data...
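As a hedged sketch of that estimate, using the failure counts quoted elsewhere in the thread (whether a "run" includes the failing trial is an assumption here):

```python
# Iterations between failures, as quoted in the thread
counts = [100, 22, 36, 44, 89, 24, 74]

# Treating each run as Bernoulli trials up to and including a failure, the
# maximum-likelihood estimate of the per-trial failure probability is
# (number of failures) / (total number of trials).
p_hat = len(counts) / sum(counts)            # 7 / 389, about 0.018

# Under that estimate, the chance of a clean run of 223 trials by luck alone
p_clean_223 = (1.0 - p_hat) ** 223           # roughly 0.017
```

So under the pre-fix failure rate, a failure-free stretch of 223 trials would be fairly unlikely (a couple of percent), though this ignores the uncertainty in `p_hat` itself.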
44,766
Estimating the probability that a software change fixed a problem
I think you could torture your data a bit with bootstrapping. Following cgillspies calculations with the geometric distribution, I played around a bit and came up with the following R-code - any corrections greatly appreciated:
fails <- c(100, 22, 36, 44, 89, 24, 74) # Observed data
N <- 100000 # Number of replications...
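The truncated R snippet can be sketched equivalently in Python; this is only an outline of the resampling idea, not a reconstruction of the original code:

```python
import numpy as np

fails = np.array([100, 22, 36, 44, 89, 24, 74])  # observed runs between failures
rng = np.random.default_rng(4)
N = 10_000                                        # bootstrap replications

# Resample the runs with replacement and record the mean of each resample
boot_means = np.array([
    rng.choice(fails, size=fails.size, replace=True).mean()
    for _ in range(N)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])   # percentile interval
```

Note that the upper bootstrap limit can never exceed 100 (the largest observed run), so the post-fix value of 223 lies far outside it — suggestive, but not a formal test.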
44,767
Estimating the probability that a software change fixed a problem
I faced this problem myself and decided to try Fisher's exact test. This has the advantage that the arithmetic boils down to something you can do with JavaScript. I put this on a web page - this should work either from there or if you download it to your computer (which you are welcome to do). I think you have a total ...
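The arithmetic behind the one-sided Fisher test can be written out directly with binomial coefficients; the 2×2 layout below (7 failures in 389 pre-fix trials versus 0 in 223 post-fix trials) is my reading of the thread's numbers, not taken from the linked page:

```python
from math import comb

pre_trials, pre_fail = 389, 7
post_trials, post_fail = 223, 0
total_trials = pre_trials + post_trials
total_fail = pre_fail + post_fail

# One-sided Fisher p-value: the probability that, scattering the failures at
# random over all trials (margins fixed), at most `post_fail` land post-fix.
p_value = sum(
    comb(post_trials, j) * comb(pre_trials, total_fail - j)
    / comb(total_trials, total_fail)
    for j in range(post_fail + 1)
)
# p_value comes out near 0.04 for these counts
```

With zero post-fix failures the sum collapses to a single hypergeometric term, and the resulting p-value of about 0.04 is borderline evidence that the fix helped.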
44,768
Should I reverse score items before running reliability analyses (item-total correlation) and factor analysis?
Yes, you should reverse score all items as needed to ensure that a particular score means the same thing on all items. You should do this for all types of analysis. For example, you have 'propensity to shoplift' measured via 3 items on a scale of 1 to 5 (where 1 is low propensity to shoplift and 5 is high). Suppose th...
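A minimal sketch of the reversal on a 1-to-5 scale (the responses and the choice of which item is reversed are made up):

```python
import numpy as np

responses = np.array([
    [5, 4, 1],    # third item is worded in the opposite direction
    [4, 5, 2],
    [1, 2, 5],
])
reversed_items = [2]     # zero-based column indices of reversed items
lo, hi = 1, 5            # scale endpoints

scored = responses.copy()
# Reverse scoring: new = (min + max) - old, applied only to reversed items
scored[:, reversed_items] = (lo + hi) - scored[:, reversed_items]
```

After this, a score of 5 means the same thing on every item, which is the prerequisite for item-total correlations and alpha to be interpretable.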
44,769
Should I reverse score items before running reliability analyses (item-total correlation) and factor analysis?
Reliability Analysis: Yes, you should reverse score the reversed items. Factor Analysis: It does not matter so much. Eigenvalues and associated indices (e.g., variance explained by factors, rules of thumb regarding number of factors to extract, etc.) should be the same. The sign of factor loadings will flip based on wh...
44,770
Why don't we use normal distribution in every problem? [closed]
The CLT does not give one permission to assume the mean is normally distributed in any and all circumstances. The mean of a sample, when viewed as a random variable obtained from many different samples from a distribution, has its own distribution. And the CLT gives criteria when that distribution is normal or when i...
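A concrete counterexample is easy to simulate: for a Cauchy distribution (which has no finite mean, so the CLT's conditions fail) the mean of n draws is again standard Cauchy, so averaging never narrows the distribution. The sample sizes here are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

single = rng.standard_cauchy(5000)                       # individual draws
means = rng.standard_cauchy((5000, 1000)).mean(axis=1)   # means of n = 1000 draws

def iqr(v):
    """Interquartile range, a spread measure robust to Cauchy's wild tails."""
    q75, q25 = np.percentile(v, [75, 25])
    return q75 - q25

# If the CLT applied, iqr(means) would shrink by roughly sqrt(1000) ~ 32x.
# For Cauchy data both IQRs sit near the theoretical value of 2.
```

Both spreads come out near 2, confirming that the sample mean of Cauchy draws is no better behaved than a single observation.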
44,771
Why don't we use normal distribution in every problem? [closed]
Under certain regularity conditions the CLT does indeed guarantee that a properly normalized sum of random variables converges to a Gaussian limit. But even in classical problems those conditions aren't always met. The "law of rare events" gives one example of where a sum of independent random variables converges to a...
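A quick simulation of that "law of rare events": Binomial(n, λ/n) counts approach a Poisson(λ) limit rather than a normal one, because the success probability shrinks as n grows. The λ = 3 and sample sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(6)
n, lam, reps = 10_000, 3.0, 200_000

binom = rng.binomial(n, lam / n, size=reps)   # many rare-event counts
pois = rng.poisson(lam, size=reps)            # the Poisson limit

# Both distributions put about exp(-3) ~ 5% of their mass on zero,
# something a normal approximation handles poorly this far into the tail.
p0_binom = float(np.mean(binom == 0))
p0_pois = float(np.mean(pois == 0))
```

The empirical probabilities of a zero count match the Poisson value exp(−λ) for both samples, illustrating the non-Gaussian limit.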
44,772
Why don't we use normal distribution in every problem? [closed]
First, the CLT doesn't guarantee that the mean of samples will be normally distributed. But more importantly, what you seem to be getting at is that the CLT is sufficient for the expected value of your estimator to be equal to the population parameter of the mean. This is known as being "unbiased". Taking the populatio...
44,773
Is there a name for a distribution where I can take log of its histogram and get back the same histogram?
To put this in more mathematical form, you want to be able to start with a random variable $X$, take a logarithm, then perhaps add and multiply by some numbers and get back the same distribution. $Y=a+b\log X$ and $X$ have the same distribution. This is possible for a distribution that puts all its probability on two p...
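The fixed-point condition is easy to see numerically. Choosing the two support points first (here 1 and e) and solving $a + b\log x = x$ at both points pins down $a$ and $b$, and any random variable supported on those two points is then literally unchanged by the transformation:

```python
import math

x1, x2 = 1.0, math.e          # chosen support points
a = x1                        # from a + b*log(x1) = x1, since log(1) = 0
b = (x2 - a) / math.log(x2)   # from a + b*log(x2) = x2, giving b = e - 1

def f(x):
    return a + b * math.log(x)

# f maps each support point to itself, so X and a + b*log(X) have exactly
# the same distribution whenever X only takes the values x1 and x2.
```

Any probabilities on the two points work, since the transformation sends each atom to itself.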
44,774
Is there a name for a distribution where I can take log of its histogram and get back the same histogram?
A logarithm can be thought of as a function that stretches/squeezes along the number line. Given a series of values, taking the log will stretch the lower end of the range, and squash the upper end of the range - small values like 0.001, 0.01, and 0.1 that are "nearby" one another get stretched to cover a larger range ...
44,775
Misunderstanding the chi squared distribution
Sampling one value from $$ \sum_{i=1}^k Z_i^2 $$ requires making one draw from $Z_1$, one draw from $Z_2$, and so forth. In other words, you must make $k$ independent draws from the $N(0, 1)$ distribution. On the other hand, sampling one value from $$ kZ^2 $$ requires you to make one single draw from $Z$, square it, and t...
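The distinction shows up immediately in simulation: both recipes share the same mean $k$, but $kZ^2$ is far more spread out (its variance is $2k^2$ versus $2k$ for the true $\chi^2_k$). A sketch with arbitrary $k = 4$:

```python
import numpy as np

rng = np.random.default_rng(7)
k, n = 4, 500_000

chisq = (rng.normal(size=(n, k)) ** 2).sum(axis=1)  # k independent draws each
kz2 = k * rng.normal(size=n) ** 2                   # ONE draw, scaled by k

# Both have mean k = 4, but Var(chi2_4) = 8 while Var(4 Z^2) = 32
```

The matching means are why the confusion is tempting; the variances make clear the two are different random variables.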
44,776
What are the downsides of ARIMA models?
One key downside is that ARIMA models tend not to forecast very well. (I'm sure I will get my share of pushback for that statement. And yes, it is too broad in a sense, but it serves as - I believe - a useful first-order approximation.) This came as something of a surprise at the earlier forecasting competition, at lea...
44,777
What are the downsides of ARIMA models?
In my answer, I respectfully disagree with the accepted answer. First of all, the fact that ARIMA models do not forecast well in forecasting competitions is not a weakness of ARIMA but is evidence that the stochastic process that produced the time series in question was one other than ARIMA and ARIMA should not have be...
44,778
What are the downsides of ARIMA models?
Most of the processes in real applications (including financial data) are not pure ARIMA processes, or are not ARIMA at all. That is why using this model to forecast those series leads to poor results. Furthermore, this model has some important limitations: It can capture only linear dependencies with the past. It...
44,779
How is a Bimodal distribution platykurtic?
Graphical comment per @whuber's Comment. Here is a histogram of a sample of a million observations from a beta distribution with shape parameters $\alpha=\beta = 0.5.$ The Wikipedia link has formulas for the mean, variance, skewness, and kurtosis of beta distributions (for given $\alpha,\beta)$. The superimposed normal...
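The claim can be checked against the closed form: for Beta(α, β) with α = β = 0.5 the excess kurtosis works out to −1.5, i.e. strongly platykurtic despite the two peaks. A simulation recovers this:

```python
import numpy as np

rng = np.random.default_rng(8)
x = rng.beta(0.5, 0.5, size=1_000_000)

# Standardize and take the fourth moment; subtracting 3 gives excess kurtosis
z = (x - x.mean()) / x.std()
excess_kurtosis = float(np.mean(z ** 4) - 3.0)   # theory: -1.5
```

The simulated value lands on −1.5 to within sampling error, matching the beta-distribution kurtosis formula on the Wikipedia page linked above.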
44,780
How is a Bimodal distribution platykurtic?
While the incorrect "peakedness" interpretation of kurtosis is finally fading away, it has been replaced by other, slightly less egregious misinterpretations. One is that high kurtosis means "a lot of data in the tails." This may have been started by Balanda and MacGillivray, who "defined" kurtosis "vaguely as the loca...
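The "tails, not the peak" point is easy to check numerically: for standardized data, kurtosis is just the average of $z^4$, and any observation with $|z| \le 1$ contributes at most 1 to that average. In the sketch below (my own illustration), even for a normal sample more than half of the kurtosis comes from the roughly 5% of points with $|z| > 2$.

```python
import numpy as np

rng = np.random.default_rng(2)
z = rng.standard_normal(1_000_000)
z = (z - z.mean()) / z.std()     # standardize: kurtosis = mean of z**4

z4 = z ** 4
kurt = z4.mean()                                 # ~3 for the normal
tail_share = z4[np.abs(z) > 2].sum() / z4.sum()  # share of kurtosis from tails
tail_frac = (np.abs(z) > 2).mean()               # yet only ~5% of the data
print(kurt, tail_share, tail_frac)
```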
44,781
Testing the difference in distribution between two groups
By inspection, it is pretty clear that Cat is under-represented in the second database. Let's see how that plays out in a chi-squared test of your $2\times 4$ contingency matrix.

db1 = c(22000, 2300, 42009, 106000)
db2 = c(  380,   30,     7,    260)
MAT = rbind(db1, db2);  MAT
     [,1] [,2]  [,3]   [,4]
db1  220...
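For readers working in Python rather than R, the same table can be tested with scipy.stats.chi2_contingency (a cross-check of the analysis above, not part of the original answer):

```python
import numpy as np
from scipy.stats import chi2_contingency

# The same 2 x 4 table as in the R code above
MAT = np.array([[22000, 2300, 42009, 106000],
                [  380,   30,     7,    260]])

chi2, p, dof, expected = chi2_contingency(MAT)
print(chi2, p, dof)   # a huge statistic on 3 df, p essentially 0
print(expected[1])    # expected db2 counts; compare cell 3 with the observed 7
```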
44,782
Testing the difference in distribution between two groups
A $\chi^2$-test would be the obvious choice, especially since you do not seem to have a problem with small cell counts.
44,783
An intuitive explanation of the instrumental variable
I think the most intuitive explanation lies in the causal Directed Acyclic Graph (DAG) approach taken by Judea Pearl, where $A\to B$ means $A$ causes $B$. The typical setup for an instrumental variable is as follows: Here the unmeasured variable $E$ is your variable causing the problem, because it sets up a backdoor p...
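A toy simulation (my own hypothetical numbers, not from the original answer) makes the backdoor problem and the instrumental-variable fix concrete: here $U$ plays the role of the unmeasured confounder $E$ above, and $Z$ is the instrument, which affects $X$ only.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

Z = rng.standard_normal(n)          # instrument: affects X only
U = rng.standard_normal(n)          # unmeasured confounder (the "E" above)
X = Z + U + 0.5 * rng.standard_normal(n)
Y = 2.0 * X + U + 0.5 * rng.standard_normal(n)   # true causal effect = 2

ols = np.cov(X, Y)[0, 1] / np.var(X)             # biased by the backdoor path
iv = np.cov(Z, Y)[0, 1] / np.cov(Z, X)[0, 1]     # Wald/IV estimator

print(ols)  # noticeably above 2: confounded
print(iv)   # ~2.0: the instrument closes the backdoor
```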
44,784
An intuitive explanation of the instrumental variable
I also struggled with an intuitive understanding of the IV method. There are a few explanations worth considering. Let me present the one I found by myself, which I find quite convincing. First, the DAG paradigm perfectly described by @Adrian Keister is the key to understanding what is going on here. It helps ...
44,785
Prove that the OLS estimator of the intercept is BLUE
This is one of those theorems that is easier to prove in greater generality using vector algebra than it is to prove with scalar algebra. To do this, consider the multiple linear regression model $\mathbf{Y} = \mathbf{x} \boldsymbol{\beta} + \boldsymbol{\varepsilon}$ and consider the general linear estimator: $$\hat{\...
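As a sanity check on the algebra, here is a small simulation (my own construction): the OLS intercept and another linear unbiased estimator of it (OLS run on only half the rows) are both unbiased, but the full-sample OLS estimator has the smaller sampling variance, as the Gauss-Markov theorem promises.

```python
import numpy as np

rng = np.random.default_rng(7)
x = np.linspace(0, 1, 60)                    # fixed design
X_full = np.column_stack([np.ones(60), x])
X_half = X_full[:30]                         # alternative: use only half the rows

a_true, b_true, reps = 1.0, 2.0, 4000
alpha_full = np.empty(reps)
alpha_half = np.empty(reps)
for r in range(reps):
    y = a_true + b_true * x + rng.standard_normal(60)
    alpha_full[r] = np.linalg.lstsq(X_full, y, rcond=None)[0][0]
    alpha_half[r] = np.linalg.lstsq(X_half, y[:30], rcond=None)[0][0]

# Both estimators are linear in y and unbiased for the intercept,
# but OLS on all the data has the smaller variance.
print(alpha_full.mean(), alpha_half.mean())   # both ~1
print(alpha_full.var(), alpha_half.var())     # full-sample variance is smaller
```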
44,786
Prove that the OLS estimator of the intercept is BLUE
I eventually figured out where I was going wrong - so I'm going to post my work here in case anyone else gets stuck down the same rabbit hole. Start by defining an alternative estimator: $$\tilde{\alpha} = \sum_{i=1}^n c_i y_i$$ and define $c_i = k_i + d_i$, where $k_i$ are the weights on the OLS estimator $\hat{\alpha...
44,787
Sample standard deviation is a biased estimator: Details in calculating the bias of $s$
Making the substitution $x = \frac{n}{2}-1$, you essentially want to control $$1 - \frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2}) \sqrt{x + \frac{1}{2}}}$$ as $x \to \infty$. Gautschi's inequality (applied with $s=\frac{1}{2}$) implies $$ 1 - \sqrt{\frac{x+1}{x+\frac{1}{2}}} <1 - \frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2}) \sqr...
44,788
Sample standard deviation is a biased estimator: Details in calculating the bias of $s$
The default approach for analyzing expressions involving Gamma functions is Stirling's asymptotic expansion $$\log \Gamma(z) = \frac{1}{2}\log(2\pi) + \left(z - \frac{1}{2}\right)\log(z) - z + \frac{1}{12z} - \frac{1}{360z^3} + \cdots$$ (and usually you don't even need that final term). This gives us some intuition ab...
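Numerically, the expansion's prediction that the bias of $s$ behaves like $\sigma/(4n)$ is easy to verify via log-gamma (a sketch assuming SciPy; here $c_4(n) = E[s]/\sigma$ is the usual unbiasing constant):

```python
import numpy as np
from scipy.special import gammaln

def c4(n):
    # E[s] = c4(n) * sigma for a normal sample of size n;
    # gammaln avoids overflow in the Gamma ratio for large n.
    return np.sqrt(2.0 / (n - 1)) * np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))

for n in (10, 100, 1000):
    print(n, 1 - c4(n), 1 / (4 * n))   # the bias 1 - c4(n) shrinks like 1/(4n)
```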
44,789
Sample standard deviation is a biased estimator: Details in calculating the bias of $s$
Comment: Using R to visualize the speed of convergence.

n = seq(5, 300, by=5)
c = 4*n*(1 - sqrt(2/(n-1))*gamma(n/2)/gamma((n-1)/2))
plot(n, c);  abline(h=1, col="green2", lwd=2)
44,790
MCMC: long burn in vs re-initialization of the chain?
If you have only one single chain (or if you want all your chains to be completely independent), then this procedure is not different from classical burn-in. However, it can accelerate convergence if you allow your chains to interact. Start from a random position, and let all your chains run independently for $T$ steps...
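A minimal sketch of this scheme (my own toy example: 1-D standard-normal target, random-walk Metropolis): warm up several dispersed chains, keep the highest-density point seen, then restart all chains from there.

```python
import numpy as np

rng = np.random.default_rng(3)

def logp(x):                      # log-density of the target, N(0, 1)
    return -0.5 * x * x

def mh_chain(x0, steps):
    # Plain random-walk Metropolis with unit proposal scale
    xs = np.empty(steps)
    x, lp = x0, logp(x0)
    for t in range(steps):
        prop = x + rng.normal(0, 1.0)
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        xs[t] = x
    return xs

# Warm-up: dispersed starts, short runs; keep the best point found.
starts = [-10.0, -5.0, 5.0, 10.0]
warm = [mh_chain(s, 500) for s in starts]
best = max((x for w in warm for x in w), key=logp)

# Re-initialize every chain at the best warm-up state, then sample.
samples = np.concatenate([mh_chain(best, 5000) for _ in starts])
print(samples.mean(), samples.var())   # ~0 and ~1 for the N(0,1) target
```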
44,791
MCMC: long burn in vs re-initialization of the chain?
The difference with the standard burn-in step in MCMC is that the latter is usually done blindly, as a fixed fraction of the overall number of iterations, e.g., 20%. Here the burn-in or warm-up step is more actively looking for a reasonable starting point, that is, one that is compatible with the target density. The per...
44,792
Regression in Causal Inference
Just to add to the excellent answers by Adrian and Noah, there is the residual question of how to establish which of the three sets of variables given above should be conditioned on. First let's recap how the backdoor criterion is applied to this particular DAG, which I'm reposting here: Usually we are interested in...
44,793
Regression in Causal Inference
In a regression model, conditioning on a variable simply means including it in your equation. For your graph (thank you for including a causal diagram!), let's say you wanted to condition on $\{U_1,U_3\}.$ Then in a regression setting, NOT conditioning on those variables would mean you would regress $Y=aX+\varepsilon.$...
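Concretely (a hypothetical simulation of my own, not the question's exact DAG): with a confounder $U_1$ of both $X$ and $Y$, leaving $U_1$ out of the regression biases the coefficient on $X$, while including it, i.e., conditioning on it, recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

U1 = rng.standard_normal(n)                       # confounder of X and Y
X = U1 + rng.standard_normal(n)
Y = 2.0 * X + 3.0 * U1 + rng.standard_normal(n)   # true effect of X is 2

# Not conditioning on U1: regress Y on X alone -> biased slope
b_unadj = np.linalg.lstsq(np.column_stack([np.ones(n), X]), Y, rcond=None)[0][1]

# Conditioning on U1: include it as a regressor -> slope ~ 2
b_adj = np.linalg.lstsq(np.column_stack([np.ones(n), X, U1]), Y, rcond=None)[0][1]

print(b_unadj)   # well above 2: confounded
print(b_adj)     # ~2.0
```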
44,794
Regression in Causal Inference
There are a few important distinctions I would like to make in this answer. The first is between a DAG and a parametric model. A DAG is a nonparametric system of structural equations, meaning that arrows do not necessarily represent main effects in a linear regression of an outcome on its causes. $X$, $U_2$, and $U_3$ ...
44,795
What is finite precision arithmetic and how does it affect SVD when computed by computers?
Floating point arithmetic is an approximation to arithmetic with real numbers. It's an approximation in the sense that all digits of a number aren't stored, but instead are truncated to a certain level of precision. This creates errors, because values like $\sqrt{2}$, which have an unending sequence of digits, can't be...
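A few standard Python examples of this truncation in action:

```python
import math

# Stored values are truncated binary approximations, so identities
# from real arithmetic can fail:
print(0.1 + 0.2 == 0.3)          # False
print(math.sqrt(2) ** 2 == 2)    # False

# Machine epsilon quantifies the relative precision of a double:
# the largest power of two that, halved, vanishes when added to 1.
eps = 1.0
while 1.0 + eps / 2 > 1.0:
    eps /= 2
print(eps)   # 2**-52, i.e. about 2.22e-16
```

It is accumulated rounding of this kind that numerically careful SVD algorithms are designed to keep under control.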
44,796
What is finite precision arithmetic and how does it affect SVD when computed by computers?
TL;DR: In computers, numbers are stored in finite slots of memory. For instance, an integer number in mathematics is a whole number such as ...,-2,-1,0,1,2,3,... that can go in both directions from negative infinity to positive infinity. In a computer this number can be represented by a type such as int8_t (in C++) which s...
44,797
Advice on running random forests on a large dataset
Some hints: 500k rows with 100 columns do not pose problems to load and prepare, even on a normal laptop. No need for big-data tools like Spark. Spark is good in situations with hundreds of millions of rows. Good random forest implementations like ranger (available in caret) are fully parallelized. The more cores, t...
44,798
Advice on running random forests on a large dataset
The answer is already given in the other answer (+1): the dataset you describe is not that big and should not need any specialized software or hardware to handle it. The only thing that I'd add is that you should rather not use Spark. You can check those benchmarks; Spark "is slower and has a larger memory footprint" ...
44,799
Expected triangle area from normal distribution
This problem can be solved through a series of simplifications and then looking things up. First, $\sigma$ merely establishes a unit of measurement: in a system where $\sigma$ is one unit, the covariance matrix is the identity and the unit of area is $\sigma^2:$ that's why the result is a multiple of $\sigma^2.$ So fr...
44,800
Expected triangle area from normal distribution
Rather than an answer I want to extend your speculation: The distribution of the area with $\sigma=1$ has a Gamma distribution with parameters 2 and $\sqrt{3}/2$. Why? First, a histogram of random samples looks very much like a Gamma distribution. (I'm using Mathematica here because I know the OP also uses Mathemati...