14,401
When the Central Limit Theorem and the Law of Large Numbers disagree
I believe it should be clear by now that "the CLT approach" gives the right answer. Let's pinpoint exactly where the "LLN approach" goes wrong. Starting with the finite statements, it is clear that we can equivalently either subtract $\sqrt{n}$ from both sides, or multiply both sides by $1/\sqrt{n}$. We get $$...
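A quick way to see the conclusion numerically (a minimal sketch; I am assuming the classic setup behind this question, iid mean-one Poisson variables, since the thread above is truncated):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup (the thread above is truncated): X_i iid Poisson(1),
# S_n = X_1 + ... + X_n, so S_n ~ Poisson(n) exactly and E[S_n] = n.
# LLN: S_n / n -> 1, which by itself says nothing about P(S_n <= n).
# CLT: (S_n - n) / sqrt(n) -> N(0, 1), so P(S_n <= n) -> 1/2.
for n in (10, 1_000, 100_000):
    s = rng.poisson(n, size=200_000)
    print(n, (s <= n).mean())  # tends to 0.5, the CLT answer
```

The LLN statement $S_n/n \to 1$ is true but too coarse to pin down $P(S_n \le n)$; the CLT's finer $\sqrt{n}$ scaling is what settles it.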
14,402
Example of distribution where large sample size is necessary for central limit theorem
Some books state that a sample size of 30 or higher is necessary for the central limit theorem to give a good approximation for $\bar{X}$. This common rule of thumb is pretty much completely useless. There are non-normal distributions for which n=2 will do okay and non-normal distributions for which much larger $n$ i...
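To make the claim concrete, here is a small Monte Carlo sketch (the parent distributions are my own picks, not from the truncated answer): at $n=30$ the sample mean of a uniform parent is essentially symmetric, the exponential is roughly workable, and a heavy-tailed lognormal is nowhere close to normal.

```python
import numpy as np

rng = np.random.default_rng(1)

def skew_of_mean(draw, n, reps=100_000):
    """Monte Carlo skewness of the sample mean for sample size n."""
    m = draw(size=(reps, n)).mean(axis=1)
    z = (m - m.mean()) / m.std()
    return (z**3).mean()

# Illustrative distributions (my choice): zero skewness ~ normal-like mean.
for name, draw in [("uniform", rng.uniform),
                   ("exponential", rng.exponential),
                   ("lognormal(0, 2)", lambda size: rng.lognormal(0.0, 2.0, size))]:
    print(name, round(skew_of_mean(draw, 30), 2))
```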
14,403
Example of distribution where large sample size is necessary for central limit theorem
In addition to the many great answers provided here, Rand Wilcox has published excellent papers on the subject and has shown that our typical checking for adequacy of the normal approximation is quite misleading (and underestimates the sample size needed). He makes an excellent point that the mean can be approximately...
14,404
Example of distribution where large sample size is necessary for central limit theorem
You might find this paper helpful (or at least interesting): http://www.umass.edu/remp/Papers/Smith&Wells_NERA06.pdf Researchers at UMass actually carried out a study similar to what you're asking: at what sample size does data from a given distribution begin to follow a normal distribution, per the CLT? Apparently a lot of data collect...
14,405
Is it appropriate to use "time" as a causal variable in a DAG?
As a partial answer to this question, I am going to put forward an argument to the effect that time itself cannot be a proper causal variable, but it is legitimate to use a "time" variable that represents a particular state-of-nature occurring or existing over a specified period of time (which is actually a state varia...
14,406
Is it appropriate to use "time" as a causal variable in a DAG?
I see no problem with this. A simple example from physics: suppose you are interested in modelling the DAG of the temperature of a glass of water. It might look something like: Time does cause the temperature to change. There are mediators in between, but it doesn't matter from this 10,000 foot view. From this DAG, it...
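For readers who like to see the structure explicitly, here is a toy encoding of such a DAG; the mediator name is a placeholder of mine, since the answer's figure is not reproduced above.

```python
import networkx as nx

# A toy version of the glass-of-water DAG; the mediator label is a
# hypothetical stand-in for whatever the original figure showed.
g = nx.DiGraph()
g.add_edges_from([("time", "heat_exchanged_with_room"),
                  ("heat_exchanged_with_room", "water_temperature")])

print(list(nx.topological_sort(g)))        # causal ordering
print(nx.is_directed_acyclic_graph(g))     # True: a valid DAG
```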
14,407
Is it appropriate to use "time" as a causal variable in a DAG?
Whether "time" is an appropriate variable in a model depends on the phenomenon you are modeling. Thus, as you posed it, your question is about model misspecification, not a fundamental question about causal modeling per se. In some models, "time" (or "year" or "duration in seconds") will be an "appropriate" variable, i...
14,408
Is it appropriate to use "time" as a causal variable in a DAG?
Time almost necessarily is a factor in any causal analysis. In fact, I would say the majority of DAGs include it without the statistician actually explicitly thinking about it. Most often, it's age. Age is time since birth. We all agree this causes mortality. We also unthinkingly model interactions between age and other ...
14,409
Is it appropriate to use "time" as a causal variable in a DAG?
Gravitational time dilation means that time passes more slowly in the vicinity of a large mass. If time can be thus dependent, then it seems likely that time can also be a cause, as it seems arbitrary to permit time one role but not the other.
14,410
General method for deriving the standard error
What you want to find is the standard deviation of the sampling distribution of the mean. I.e., in plain English, the sampling distribution is when you pick $n$ items from your population, add them together, and divide the sum by $n$. We then find the variance of this quantity and get the standard deviation by taking t...
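Filling in the algebra the truncated answer is heading toward, assuming $n$ iid draws with common variance $\sigma^2$ (independence is what lets the variance of the sum split into a sum of variances):

$$\operatorname{Var}(\bar X) = \operatorname{Var}\!\Big(\frac{1}{n}\sum_{i=1}^n X_i\Big) = \frac{1}{n^2}\sum_{i=1}^n \operatorname{Var}(X_i) = \frac{\sigma^2}{n}, \qquad \operatorname{SE}(\bar X) = \frac{\sigma}{\sqrt{n}}.$$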
14,411
General method for deriving the standard error
The standard error is the standard deviation of the statistic (under the null hypothesis, if you're testing). A general method for finding standard error would be to first find the distribution or moment generating function of your statistic, find the second central moment, and take the square root. For example, if you...
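As a concrete check of this recipe (my own example, not from the truncated answer): for the sample proportion of Bernoulli($p$) data, the second central moment of the statistic is $p(1-p)/n$, and a simulation recovers $\sqrt{p(1-p)/n}$.

```python
import numpy as np

rng = np.random.default_rng(2)

# The statistic is the sample proportion of Bernoulli(p) data; its exact
# standard error is sqrt(p * (1 - p) / n).
p, n = 0.3, 50
phat = rng.binomial(n, p, size=200_000) / n    # 200k replicated statistics
print(phat.std())                              # Monte Carlo standard error
print(np.sqrt(p * (1 - p) / n))                # closed form, ~0.0648
```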
14,412
What does Theta mean?
It is not a convention, but quite often $\theta$ stands for the set of parameters of a distribution. That was it for plain English, let's show examples instead. Example 1. You want to study the throw of an old fashioned thumbtack (the ones with a big circular bottom). You assume that the probability that it falls point...
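A minimal sketch of Example 1 in code, with made-up toss data: the Bernoulli likelihood in $\theta$ is maximized at the sample frequency.

```python
import numpy as np

# The thumbtack experiment as a Bernoulli(theta) model: theta is the
# unknown probability of landing point up. The data below are invented.
tosses = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 1])  # 1 = point up

def log_likelihood(theta, x):
    return np.sum(x * np.log(theta) + (1 - x) * np.log(1 - theta))

grid = np.linspace(0.01, 0.99, 99)
mle = grid[np.argmax([log_likelihood(t, tosses) for t in grid])]
print(mle, tosses.mean())  # the MLE of theta is the sample frequency
```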
14,413
What does Theta mean?
What $\theta$ refers to depends on what model you are working with. For example, in ordinary least squares regression, you model a dependent variable (usually called Y) as a linear combination of one or more independent variables (usually called X), getting something like $Y_i = b_0 + b_1x_{i1} + b_2x_{i2} + \dots + b_px_{ip}$ wh...
14,414
What does Theta mean?
In plain English: a statistical distribution is a mathematical function $f$ that tells you the probability of different values of your random variable $X$ that has the distribution $f$, i.e. $f(x)$ outputs the probability of $x$. There are different such functions, but for now let's consider $f$ as some kind of "ge...
14,415
Back-transformation of regression coefficients
One problem is that you've written $$Y=\alpha+\beta\cdot X$$ That is a simple deterministic (i.e. non-random) model. In that case, you could back-transform the coefficients to the original scale, since it's just a matter of some simple algebra. But in usual regression you only have $E(Y|X)=\alpha+\beta\cdot X$; you've left the error term out...
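A small simulation of why the error term matters (a sketch under the common log-transform case, which is my assumption; the truncated answer does not name the transform): with $\log Y$ normal, naively exponentiating the fitted mean of $\log Y$ recovers the median of $Y$, not its mean.

```python
import numpy as np

rng = np.random.default_rng(3)

# If log(Y) ~ N(mu, sigma^2), then E[Y] = exp(mu + sigma^2 / 2) > exp(mu),
# so exponentiating the mean on the log scale is not the mean of Y.
mu, sigma = 1.0, 0.8
y = np.exp(rng.normal(mu, sigma, size=1_000_000))
print(y.mean())                   # ~ e^{mu + sigma^2/2}, about 3.74
print(np.exp(np.log(y).mean()))   # ~ e^{mu}, about 2.72: not the same
```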
14,416
Back-transformation of regression coefficients
I salute your efforts here, but you're barking up the wrong tree. You don't back transform betas. Your model holds in the transformed data world. If you want to make a prediction, for example, you back transform $\hat{y}_i$, but that's it. Of course, you can also get a prediction interval by computing the high and ...
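Here is a rough sketch of that workflow (my own construction; the plus-or-minus two standard deviations band is a crude stand-in for a proper prediction interval): fit on the log scale, back-transform the fitted values and the interval endpoints, and leave the betas alone.

```python
import numpy as np

rng = np.random.default_rng(4)

# Model log(y) on x; back-transform predictions and interval endpoints.
x = np.linspace(1, 10, 200)
y = np.exp(0.5 + 0.3 * x + rng.normal(0, 0.2, size=x.size))

b1, b0 = np.polyfit(x, np.log(y), deg=1)   # fit in the transformed world
log_pred = b0 + b1 * x
resid_sd = np.std(np.log(y) - log_pred)

pred = np.exp(log_pred)                    # back-transform the prediction
lo = np.exp(log_pred - 2 * resid_sd)       # crude interval, back-transformed
hi = np.exp(log_pred + 2 * resid_sd)
print(pred[:3], lo[:3], hi[:3])
```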
14,417
What does conditioning on a random variable mean?
Conditioning on an event (such as a particular specification of a random variable) means that this event is treated as being known to have occurred. This still allows us to specify conditioning on an event $\{ Y=y \}$ where the actual value $y$ is an algebraic variable that falls within some range.$^\dagger$ For exam...
14,418
What does conditioning on a random variable mean?
Conditioning on a random variable is much more subtle than conditioning on an event. Conditioning on an Event Recall that for an event $B$ with $P(B) > 0$ we define the conditional probability given $B$ by $$ P(A \mid B) = \frac{P(A \cap B)}{P(B)} $$ for every event $A$. This defines a new probability measure $P(\ \cdo...
14,419
What does conditioning on a random variable mean?
It means that the value of the random variable Y is known. For example, suppose $E(X|Y)=10+Y^2$. Then if $Y=2, ~E(X|Y=2)=14.$
14,420
Is PCA optimization convex?
No, the usual formulations of PCA are not convex problems. But they can be transformed into a convex optimization problem. The insight and the fun of this is following and visualizing the sequence of transformations rather than just getting the answer: it lies in the journey, not the destination. The chief steps in t...
14,421
Is PCA optimization convex?
No. Rank $k$ PCA of matrix $M$ can be formulated as $\hat{X} = \underset{\operatorname{rank}(X) \leq k}{\operatorname{argmin}} \| M - X\|_F^2$ (where $\|\cdot\|_F$ is the Frobenius norm). For the derivation, see the Eckart-Young theorem. Though the norm is convex, the set over which it is optimized is nonconvex. A convex relaxation of PCA's problem is called Convex...
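A short numeric check of the Eckart-Young statement (a sketch; the matrix is random): truncating the SVD attains the minimum, and the residual Frobenius norm equals the root-sum-of-squares of the discarded singular values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Best rank-k approximation of M in Frobenius norm: truncate the SVD,
# even though the rank-constrained feasible set is nonconvex.
M = rng.normal(size=(8, 6))
k = 2
U, s, Vt = np.linalg.svd(M, full_matrices=False)
X_hat = (U[:, :k] * s[:k]) @ Vt[:k, :]

print(np.linalg.matrix_rank(X_hat))                           # k
print(np.linalg.norm(M - X_hat), np.sqrt((s[k:]**2).sum()))   # equal
```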
14,422
Is PCA optimization convex?
Disclaimer: The previous answers do a pretty good job of explaining how PCA in its original formulation is non-convex but can be converted to a convex optimization problem. My answer is only meant for those poor souls (such as me) who are not so familiar with the jargon of Unit Spheres and SVDs - which is, btw, good t...
14,423
Does Bayes theorem hold for expectations?
$$E[A\mid B] \stackrel{?}= E[B\mid A]\frac{E[A]}{E[B]} \tag 1$$ The conjectured result $(1)$ is trivially true for independent random variables $A$ and $B$ with nonzero means. If $E[B]=0$, then the right side of $(1)$ involves a division by $0$ and so $(1)$ is meaningless. Note that whether or not $A$ and $B$ are inde...
14,424
Does Bayes theorem hold for expectations?
The result is untrue in general; let us see that in a simple example. Let $X \mid P=p$ have a binomial distribution with parameters $n,p$ and $P$ have the beta distribution with parameters $(\alpha, \beta)$, that is, a Bayesian model with conjugate prior. Now just calculate the two sides of your formula, the left han...
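Completing the computation the truncated answer sets up, using the standard conjugate-prior facts $E[X\mid P]=nP$, $E[P\mid X]=\frac{\alpha+X}{\alpha+\beta+n}$, $E[X]=\frac{n\alpha}{\alpha+\beta}$, and $E[P]=\frac{\alpha}{\alpha+\beta}$:

$$E[X\mid P]=nP, \qquad E[P\mid X]\,\frac{E[X]}{E[P]}=\frac{\alpha+X}{\alpha+\beta+n}\cdot n = \frac{n(\alpha+X)}{\alpha+\beta+n}.$$

One side is a function of $P$ and the other a function of $X$, so the conjectured identity cannot hold as an equality of random variables except in degenerate cases.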
14,425
Does Bayes theorem hold for expectations?
The conditional expected value of a random variable $A$ given the event that $B=b$ is a number that depends on what number $b$ is. So call it $h(b).$ Then the conditional expected value $\operatorname{E}(A\mid B)$ is $h(B),$ a random variable whose value is completely determined by the value of the random variable $B$....
14,426
Does Bayes theorem hold for expectations?
The expression certainly does not hold in general. For the fun of it, I show below that if $A$ and $B$ follow jointly a bivariate normal distribution, and have non-zero means, the result will hold if the two variables are linear functions of each other and have the same coefficient of variation (the ratio of standard d...
14,427
Matrix notation for logistic regression
In linear regression the maximum likelihood estimation (MLE) solution for estimating $x$ has the following closed-form solution (assuming that $A$ is a matrix with full column rank): $$\hat{x}_\text{lin}=\underset{x}{\text{argmin}} \|Ax-b\|_2^2 = (A^TA)^{-1}A^Tb$$ This is read as "find the $x$ that minimizes the objecti...
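Since the logistic-regression MLE this answer goes on to contrast has no such closed form, here is a bare-bones Newton-Raphson (IRLS) sketch in the same $A$, $x$, $b$ notation; the data are simulated and the fixed iteration count is a simplification of a real stopping rule.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated design and binary response under known coefficients.
n, d = 500, 3
A = np.column_stack([np.ones(n), rng.normal(size=(n, d - 1))])
x_true = np.array([-0.5, 1.0, 2.0])
b = rng.binomial(1, 1 / (1 + np.exp(-A @ x_true)))

x = np.zeros(d)
for _ in range(25):
    p = 1 / (1 + np.exp(-A @ x))         # predicted probabilities
    W = p * (1 - p)                      # diagonal of the IRLS weight matrix
    grad = A.T @ (b - p)                 # score (gradient of log-likelihood)
    hess = A.T @ (A * W[:, None])        # observed information, A^T W A
    x = x + np.linalg.solve(hess, grad)  # Newton step

print(x)  # close to x_true for this simulated data
```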
14,428
Matrix notation for logistic regression
@joceratops answer focuses on the optimization problem of maximum likelihood for estimation. This is indeed a flexible approach that is amenable to many types of problems. For estimating most models, including linear and logistic regression models, there is another general approach that is based on the method of moment...
14,429
square things in statistics- generalized rationale [duplicate]
$\newcommand{\predicted}{{\rm predicted}}\newcommand{\actual}{{\rm actual}}\newcommand{\Var}{{\rm Var}}$ You're right that one could instead choose to use the absolute error--in fact, the absolute error is often closer to what you "care about" when making predictions from your model. For instance, if you buy a stock ex...
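A quick simulation of this trade-off (my own illustration): the constant that minimizes squared error is the mean, while absolute error is minimized by the median, and with skewed data the two disagree.

```python
import numpy as np

rng = np.random.default_rng(7)

# Skewed data: the lognormal has mean e^{1/2} ~ 1.65 but median 1.
data = rng.lognormal(0.0, 1.0, size=50_000)
grid = np.linspace(0.2, 4.0, 500)

sq = [(np.mean((data - c) ** 2), c) for c in grid]   # squared error
ab = [(np.mean(np.abs(data - c)), c) for c in grid]  # absolute error
print(min(sq)[1], data.mean())        # minimizer ~ the mean
print(min(ab)[1], np.median(data))    # minimizer ~ the median
```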
14,430
square things in statistics- generalized rationale [duplicate]
It's because of the close connection between many statistical methods and geometric concepts such as projections, distances, and the Pythagorean Theorem. For example, suppose that you view the data values $(x_1,x_2,\ldots,x_n)$ as a point in $n$-dimensional space. Then the sample SD is $1/\sqrt {n-1}$ times the distanc...
14,431
square things in statistics- generalized rationale [duplicate]
Because it makes the math easier. One can use other techniques, for example, for linear regression. These other methods tend to be more complicated in implementation details and have less elegant closed-form solutions. Thus they are often ignored until a project demands they be used.
14,432
square things in statistics- generalized rationale [duplicate]
Honestly, it's because it makes the math easier than if absolute value were used. Laplace in fact tried to use absolute value instead of squared differences. It makes things quite annoying. Here's a link to a description of the Laplace distribution http://en.wikipedia.org/wiki/Laplace_distribution. Before computers usin...
14,433
How to best visualize differences in many proportions across three groups?
Thanks for making the data accessible and for an interesting dataset and graphical challenge. My main suggestion is a (Cleveland) dot chart. The most important details I would like to emphasise: Superimposition here allows and eases comparison. The order of topics in your displays appears quite arbitrary. Absent ...
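A minimal matplotlib sketch of such a dot chart, with invented topic names and proportions since the question's data are not reproduced here:

```python
import matplotlib.pyplot as plt

# Hypothetical topics and coverage proportions for three outlets.
topics = ["economy", "health", "crime", "sports"]
groups = {"outlet A": [0.30, 0.25, 0.25, 0.20],
          "outlet B": [0.40, 0.20, 0.25, 0.15],
          "outlet C": [0.25, 0.35, 0.20, 0.20]}

fig, ax = plt.subplots()
ys = list(range(len(topics)))
for label, props in groups.items():
    ax.scatter(props, ys, label=label)  # superimposed points per group
ax.set_yticks(ys)
ax.set_yticklabels(topics)              # topics share one common axis
ax.set_xlabel("proportion of coverage")
ax.legend()
plt.show()
```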
14,434
How to best visualize differences in many proportions across three groups?
The dot plot from Nick Cox is probably best for the complete picture. If you really want to emphasize the first versus second relationship, here's a modification to your chart that offsets the difference bar with the length of the second bar. And for a different big picture view, you can try something like a slope cha...
14,435
How to best visualize differences in many proportions across three groups?
My first instinct was to suggest a Mosaic plot; it graphs each sub-category as a rectangle, where one dimension represents the total count for the main category and the other dimension represents the sub-category's proportionate share. There's an R package to draw them, but it's also fairly straightforward to do with l...
14,436
How to best visualize differences in many proportions across three groups?
Have you tried a bubble chart? https://code.google.com/apis/ajax/playground/?type=visualization#bubble_chart The individual topics could be circles and each circle could be a pie chart of the percentage that each news outlet covers the topic. The size of the circle could indicate the relative coverage of the topic. e.g ...
14,437
The linearity of variance
$\DeclareMathOperator{\Cov}{Cov}$ $\DeclareMathOperator{\Corr}{Corr}$ $\DeclareMathOperator{\Var}{Var}$ The problem with your line of reasoning is "I think we can always assume $X$ to be independent from the other $X$s." $X$ is not independent of $X$. The symbol $X$ is being used to refer to the same random variable ...
14,438
The linearity of variance
Another way of thinking about it is that with random variables $2X \neq X + X$. $2X$ would mean two times the value of the outcome of $X$, while $X + X$ would mean two trials of $X$. In other words, it's the difference between rolling a die once and doubling the result, vs rolling a die twice.
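Worked numbers for the die example, using $\operatorname{Var}(aX)=a^2\operatorname{Var}(X)$ and additivity of variance for independent rolls: for a fair six-sided die, $\operatorname{Var}(X)=E[X^2]-(E[X])^2=\frac{91}{6}-\frac{49}{4}=\frac{35}{12}$, so

$$\operatorname{Var}(2X)=4\cdot\frac{35}{12}=\frac{35}{3}, \qquad \operatorname{Var}(X_1+X_2)=2\cdot\frac{35}{12}=\frac{35}{6}.$$

Doubling one roll spreads the outcomes twice as far; two independent rolls partly cancel.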
14,439
Why kurtosis of a normal distribution is 3 instead of 0
Kurtosis is certainly not the location of where the peak is. As you say, that's already called the mode. Kurtosis is the standardized fourth moment: If $Z=\frac{X-\mu}{\sigma}$, is a standardized version of the variable we're looking at, then the population kurtosis is the average fourth power of that standardized vari...
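A quick numeric illustration of the standardized fourth moment (my own example): for normal draws it settles near 3, while a flat-tailed uniform sits near 1.8, which is why "excess kurtosis" subtracts 3.

```python
import numpy as np

rng = np.random.default_rng(8)

# Kurtosis as the mean fourth power of the standardized variable.
x = rng.normal(5.0, 2.0, size=1_000_000)
z = (x - x.mean()) / x.std()
print((z**4).mean())            # ~ 3.0 for normal data

u = rng.uniform(size=1_000_000)
zu = (u - u.mean()) / u.std()
print((zu**4).mean())           # ~ 1.8 for the uniform, by contrast
```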
14,440
Why kurtosis of a normal distribution is 3 instead of 0
Here is a direct visualization to understand what the number "3" refers to in the kurtosis of the normal distribution. Let $X$ be normally distributed, and let $Z = (X-\mu)/\sigma$. Let $V = Z^4$. Consider the graph of the pdf of $V$, $p_V(v)$. This curve is to the right of zero, and extends to infinity, with 0.9...
14,441
Stats is not maths?
Mathematics deals with idealized abstractions that (almost always) have absolute solutions, or the fact that no such solution exists can generally be described fully. It is the science of discovering complex but necessary consequences from simple axioms. Statistics uses math, but it is not math. It's educated guesswor...
14,442
Stats is not maths?
Tongue firmly in cheek: Einstein apparently wrote As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality. so statistics is the branch of maths that describes reality. ;o) I'd say statistics is a branch of mathematics in the same way th...
14,443
Stats is not maths?
Well if you say that "something like statistics, where you can't build everything on basic axioms" then you should probably read about Kolmogorov's axiomatic theory of probability. Kolmogorov defines probability in an abstract and axiomatic way as you can see in this pdf on page 42 or here at the bottom of page 1 and ...
14,444
Stats is not maths?
I have no rigorous or philosophical basis for answering this, but I've heard the "stats is not math" complaint often from people, usually physics types. I think people want guaranteed certainty from their math, and statistics (usually) offers only probabilistic conclusions with associated p values. Actually, this is ex...
14,445
Stats is not maths?
Statistical tests, models, and inference tools are formulated in the language of mathematics, and statisticians have mathematically proven thick books of very important and interesting results about them. In many cases, the proofs provide compelling evidence that the statistical tools in question are reliable and/or po...
14,446
Stats is not maths?
Maybe it's because I'm a plebe and haven't taken any advanced mathematical courses, but I don't see why statistics isn't mathematics. The arguments here and on a duplicate question seem to argue two primary points as to why statistics isn't mathematics*. It isn't exact/certain, and as such relies on assumptions. It app...
14,447
Stats is not maths?
The "difference" relies on: Inductive reasoning vs. Deductive reasoning vs. Inference. For instance, no mathematical theorem can tell what distribution or prior you can use for your data/model. By the way, Bayesian statistics is an axiomatised area.
14,448
Stats is not maths?
This may be a very unpopular opinion, but given the history and formulation of concepts of statistics (and probability theory), I consider statistics to be a subbranch of physics. Indeed, Gauss initially formalized the least squares regression model in astronomical predictions. The majority of contributions to statist...
14,449
How to generate a non-integer amount of consecutive Bernoulli successes?
We can solve this via a couple of "tricks" and a little math. Here is the basic algorithm: Generate a Geometric random variable with probability of success $p$. The outcome of this random variable determines a fixed known value $f_n \in [0,1]$. Generate a $\mathrm{Ber}(f_n)$ random variable using fair coin flips gener...
How to generate a non-integer amount of consecutive Bernoulli successes?
We can solve this via a couple of "tricks" and a little math. Here is the basic algorithm: Generate a Geometric random variable with probability of success $p$. The outcome of this random variable de
How to generate a non-integer amount of consecutive Bernoulli successes? We can solve this via a couple of "tricks" and a little math. Here is the basic algorithm: Generate a Geometric random variable with probability of success $p$. The outcome of this random variable determines a fixed known value $f_n \in [0,1]$. G...
How to generate a non-integer amount of consecutive Bernoulli successes? We can solve this via a couple of "tricks" and a little math. Here is the basic algorithm: Generate a Geometric random variable with probability of success $p$. The outcome of this random variable de
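The answer above is truncated, but its second step is self-contained enough to sketch: drawing a Ber(f) variable from fair coin flips by comparing the flips, read as binary digits of a hidden Uniform(0,1) draw, against the binary expansion of f. A minimal R sketch under that reading; the function name is ours, and f is a placeholder for the f_n values produced by the (truncated) geometric/power-series construction:

# Step 1 of the algorithm would be a geometric draw such as n <- rgeom(1, p) + 1;
# this sketch covers step 2 only: Ber(f) from fair coin flips. The first flip
# that differs from the matching bit of f decides whether the implicit uniform
# draw fell below f, so on average only 2 flips are needed.
rber_faircoin <- function(f, max_bits = 64) {
  for (k in seq_len(max_bits)) {
    coin <- sample(0:1, 1)                # one fair coin flip
    bit  <- as.integer(f >= 0.5)          # leading binary digit of f
    f    <- 2 * f - bit                   # shift f one bit to the left
    if (coin != bit) return(coin < bit)   # TRUE (= success) iff uniform < f
  }
  FALSE                                   # unresolved after max_bits: negligible
}
mean(replicate(1e4, rber_faircoin(0.3)))  # should be close to 0.3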
14,450
How to generate a non-integer amount of consecutive Bernoulli successes?
Is the following answer silly? If $X_1,\dots,X_n$ are independent $\mathrm{Ber}(p)$ and $Y_n$ has distribution $\mathrm{Ber}\left(\left(\sum_{i=1}^n X_i/n \right)^a\right)$, then $Y_n$ will be approximately distributed as $\mathrm{Ber}(p^a)$, when $n\to\infty$. Hence, if you don't know $p$, but you can toss this coin a...
How to generate a non-integer amount of consecutive Bernoulli successes?
Is the following answer silly? If $X_1,\dots,X_n$ are independent $\mathrm{Ber}(p)$ and $Y_n$ has distribution $\mathrm{Ber}\left(\left(\sum_{i=1}^n X_i/n \right)^a\right)$, then $Y_n$ will be approxi
How to generate a non-integer amount of consecutive Bernoulli successes? Is the following answer silly? If $X_1,\dots,X_n$ are independent $\mathrm{Ber}(p)$ and $Y_n$ has distribution $\mathrm{Ber}\left(\left(\sum_{i=1}^n X_i/n \right)^a\right)$, then $Y_n$ will be approximately distributed as $\mathrm{Ber}(p^a)$, when...
How to generate a non-integer amount of consecutive Bernoulli successes? Is the following answer silly? If $X_1,\dots,X_n$ are independent $\mathrm{Ber}(p)$ and $Y_n$ has distribution $\mathrm{Ber}\left(\left(\sum_{i=1}^n X_i/n \right)^a\right)$, then $Y_n$ will be approxi
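A quick R simulation of this proposal, using only the quantities named in the answer (the values of p, a and n are arbitrary):

# Toss the p-coin n times, then flip once with success probability (mean)^a.
set.seed(1)
p <- 0.6; a <- 2.5; n <- 1000
draw_Yn <- function() {
  phat <- mean(rbinom(n, 1, p))   # X_1, ..., X_n ~ Ber(p)
  rbinom(1, 1, phat^a)            # Y_n | X ~ Ber(phat^a)
}
mean(replicate(2e4, draw_Yn()))   # empirical P(Y_n = 1)
p^a                               # target, here about 0.279

It is only approximate at finite n because E[phat^a] differs from p^a for nonlinear powers (Jensen's inequality), but the gap shrinks as n grows.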
14,451
How to generate a non-integer amount of consecutive Bernoulli successes?
I posted the following exposition of this question and cardinal's answer to the General Discussion forum of the current Analytic Combinatorics class on Coursera, "Application of power series to constructing a random variable." I'm posting a copy here as community wiki to make this publicly and more permanently availabl...
How to generate a non-integer amount of consecutive Bernoulli successes?
I posted the following exposition of this question and cardinal's answer to the General Discussion forum of the current Analytic Combinatorics class on Coursera, "Application of power series to constr
How to generate a non-integer amount of consecutive Bernoulli successes? I posted the following exposition of this question and cardinal's answer to the General Discussion forum of the current Analytic Combinatorics class on Coursera, "Application of power series to constructing a random variable." I'm posting a copy h...
How to generate a non-integer amount of consecutive Bernoulli successes? I posted the following exposition of this question and cardinal's answer to the General Discussion forum of the current Analytic Combinatorics class on Coursera, "Application of power series to constr
14,452
How to generate a non-integer amount of consecutive Bernoulli successes?
The very complete answer by cardinal and subsequent contributions inspired the following remark/variant. Let PZ stand for "Probability of Zero" and $q:=1-p$. If $X_n$ is an iid Bernoulli sequence with PZ $q$, then $M_n := \max(X_1,\,X_2,\,\dots, X_n)$ is a Bernoulli r.v. with PZ $q^n$. Now making $n$ random, i.e., replacing...
How to generate a non-integer amount of consecutive Bernoulli successes?
The very complete answer by cardinal and subsequent contributions inspired the following remark/variant. Let PZ stand for "Probability of Zero" and $q:=1-p$. If $X_n$ is an iid Bernoulli sequence with PZ
How to generate a non-integer amount of consecutive Bernoulli successes? The very complete answer by cardinal and subsequent contributions inspired the following remark/variant. Let PZ stand for "Probability of Zero" and $q:=1-p$. If $X_n$ is an iid Bernoulli sequence with PZ $q$, then $M_n := \max(X_1,\,X_2,\,\dots, X_n)$...
How to generate a non-integer amount of consecutive Bernoulli successes? The very complete answer by cardinal and subsequent contributions inspired the following remark/variant. Let PZ stand for "Probability of Zero" and $q:=1-p$. If $X_n$ is an iid Bernoulli sequence with PZ
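A two-line R check of the stated building block (the randomised-n continuation is truncated above and not reproduced here):

# If X_1, ..., X_n are iid Bernoulli with P(X = 0) = q, then
# M_n = max(X_1, ..., X_n) has P(M_n = 0) = q^n.
set.seed(1)
q <- 0.7; n <- 5
m <- replicate(1e5, max(rbinom(n, 1, 1 - q)))
mean(m == 0)   # empirical PZ of the max
q^n            # theoretical value: 0.7^5 = 0.16807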
14,453
Advantages and disadvantages of SVM
There are four main advantages: Firstly it has a regularisation parameter, which makes the user think about avoiding over-fitting. Secondly it uses the kernel trick, so you can build in expert knowledge about the problem via engineering the kernel. Thirdly an SVM is defined by a convex optimisation problem (no local ...
Advantages and disadvantages of SVM
There are four main advantages: Firstly it has a regularisation parameter, which makes the user think about avoiding over-fitting. Secondly it uses the kernel trick, so you can build in expert knowle
Advantages and disadvantages of SVM There are four main advantages: Firstly it has a regularisation parameter, which makes the user think about avoiding over-fitting. Secondly it uses the kernel trick, so you can build in expert knowledge about the problem via engineering the kernel. Thirdly an SVM is defined by a co...
Advantages and disadvantages of SVM There are four main advantages: Firstly it has a regularisation parameter, which makes the user think about avoiding over-fitting. Secondly it uses the kernel trick, so you can build in expert knowle
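A small R illustration of the first three points, assuming the e1071 package (not named in the answer): cost is the regularisation parameter, kernel builds in problem knowledge, and the underlying optimisation problem is convex, so refitting reaches the same optimum.

# Explicit regularisation (cost) and kernel choice in an SVM fit.
library(e1071)
set.seed(1)
x <- matrix(rnorm(200 * 2), ncol = 2)
y <- factor(ifelse(rowSums(x^2) > 2, "out", "in"))  # circular class boundary
fit <- svm(x, y, kernel = "radial", cost = 1)       # convex problem: no local minima
table(predicted = predict(fit, x), truth = y)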
14,454
Interpreting R's ur.df (Dickey-Fuller unit root test) results
It seems the creators of this particular R command presume one is familiar with the original Dickey-Fuller formulae, so did not provide the relevant documentation for how to interpret the values. I found that Enders was an incredibly helpful resource (Applied Econometric Time Series 3e, 2010, p. 206-209--I imagine oth...
Interpreting R's ur.df (Dickey-Fuller unit root test) results
It seems the creators of this particular R command presume one is familiar with the original Dickey-Fuller formulae, so did not provide the relevant documentation for how to interpret the values. I f
Interpreting R's ur.df (Dickey-Fuller unit root test) results It seems the creators of this particular R command presume one is familiar with the original Dickey-Fuller formulae, so did not provide the relevant documentation for how to interpret the values. I found that Enders was an incredibly helpful resource (Appli...
Interpreting R's ur.df (Dickey-Fuller unit root test) results It seems the creators of this particular R command presume one is familiar with the original Dickey-Fuller formulae, so did not provide the relevant documentation for how to interpret the values. I f
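For orientation, the basic calls look like this (a minimal sketch assuming the urca package, which provides ur.df); summary() prints the tau and phi statistics that the answer interprets via Enders:

# The three flavours of the ADF test in urca.
library(urca)
set.seed(1)
y <- cumsum(rnorm(200))                      # a random walk, so a unit root is present
summary(ur.df(y, type = "none",  lags = 1))  # reports tau1
summary(ur.df(y, type = "drift", lags = 1))  # reports tau2 and phi1
summary(ur.df(y, type = "trend", lags = 1))  # reports tau3, phi2 and phi3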
14,455
Interpreting R's ur.df (Dickey-Fuller unit root test) results
As joint-p already pointed out, the significance codes are fairly standard and they correspond to p-values, i.e. the statistical significance of a hypothesis test. A p-value of .01 means that a result at least this extreme would occur only 1% of the time if the null hypothesis were true, so the null can be rejected at the 1% level. The Wikipedia article on Dickey-Fuller describes the three versions of the Dickey-...
Interpreting R's ur.df (Dickey-Fuller unit root test) results
As joint-p already pointed out, the significance codes are fairly standard and they correspond to p-values, i.e. the statistical significance of a hypothesis test. A p-value of .01 means that a result
Interpreting R's ur.df (Dickey-Fuller unit root test) results As joint-p already pointed out, the significance codes are fairly standard and they correspond to p-values, i.e. the statistical significance of a hypothesis test. A p-value of .01 means that a result at least this extreme would occur only 1% of the time if the null hypothesis were true, so the null can be rejected at the 1% level. The Wikipedia articl...
Interpreting R's ur.df (Dickey-Fuller unit root test) results As joint-p already pointed out, the significance codes are fairly standard and they correspond to p-values, i.e. the statistical significance of a hypothesis test. A p-value of .01 means that a result
14,456
Interpreting R's ur.df (Dickey-Fuller unit root test) results
I found Jeramy's answer pretty easy to follow, but constantly found myself trying to walk through the logic correctly and making mistakes. I coded up an R function that interprets each of the three types of models, and gives warnings if there are inconsistencies or inconclusive results (I don't think there ever should...
Interpreting R's ur.df (Dickey-Fuller unit root test) results
I found Jeramy's answer pretty easy to follow, but constantly found myself trying to walk through the logic correctly and making mistakes. I coded up an R function that interprets each of the three t
Interpreting R's ur.df (Dickey-Fuller unit root test) results I found Jeramy's answer pretty easy to follow, but constantly found myself trying to walk through the logic correctly and making mistakes. I coded up an R function that interprets each of the three types of models, and gives warnings if there are inconsiste...
Interpreting R's ur.df (Dickey-Fuller unit root test) results I found Jeramy's answer pretty easy to follow, but constantly found myself trying to walk through the logic correctly and making mistakes. I coded up an R function that interprets each of the three t
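The function itself is truncated above, so here is a hypothetical miniature of the same idea rather than the original code: pull the statistics and 5% critical values out of the fitted object (the @teststat and @cval slots of urca's ur.df class) and compare them, remembering that the tau tests reject in the left tail while the F-type phi tests reject in the right tail.

# Hypothetical mini-interpreter for a ur.df result (urca assumed).
interpret_urdf_5pct <- function(urdf) {
  stats  <- drop(urdf@teststat)                 # e.g. tau3, phi2, phi3 for type = "trend"
  crit   <- urdf@cval[, "5pct"]
  is_tau <- grepl("^tau", names(stats))
  reject <- ifelse(is_tau, stats < crit, stats > crit)  # tau: left tail; phi: right tail
  data.frame(statistic = names(stats), value = as.numeric(stats),
             crit_5pct = as.numeric(crit), reject = reject)
}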
14,457
Interpreting R's ur.df (Dickey-Fuller unit root test) results
More info in Roger Perman's lecture notes on unit root tests. See also Table 4.2 in Enders, Applied Econometric Time Series (4e), which summarizes the different hypotheses to which these test statistics refer. Content agrees with the image provided above.
Interpreting R's ur.df (Dickey-Fuller unit root test) results
More info in Roger Perman's lecture notes on unit root tests. See also Table 4.2 in Enders, Applied Econometric Time Series (4e), which summarizes the different hypotheses to which these test statistic
Interpreting R's ur.df (Dickey-Fuller unit root test) results More info in Roger Perman's lecture notes on unit root tests. See also Table 4.2 in Enders, Applied Econometric Time Series (4e), which summarizes the different hypotheses to which these test statistics refer. Content agrees with the image provided above.
Interpreting R's ur.df (Dickey-Fuller unit root test) results More info in Roger Perman's lecture notes on unit root tests. See also Table 4.2 in Enders, Applied Econometric Time Series (4e), which summarizes the different hypotheses to which these test statistic
14,458
Interpreting R's ur.df (Dickey-Fuller unit root test) results
phi1 phi2 phi3 are equivalent to F-tests in ADF framew...
Interpreting R's ur.df (Dickey-Fuller unit root test) results
phi1 phi2 phi3 are equivalent to F-tests in ADF framew
Interpreting R's ur.df (Dickey-Fuller unit root test) results phi1 phi2 phi3 are equivalent to F-tests in ADF framew...
Interpreting R's ur.df (Dickey-Fuller unit root test) results phi1 phi2 phi3 are equivalent to F-tests in ADF framew
14,459
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on. How?
The Bayesian approach to (parametric) statistical inference starts from a statistical model, i.e. a family of parametrised distributions, $$X\sim F_\theta,\qquad\theta\in\Theta$$ and it introduces a supplementary probability distribution on the parameter $$\theta\sim\pi(\theta)$$ The posterior distribution on $\theta$ is...
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on
The Bayesian approach to (parametric) statistical inference starts from a statistical model, i.e. a family of parametrised distributions, $$X\sim F_\theta,\qquad\theta\in\Theta$$ and it introduces a sup
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on. How? The Bayesian approach to (parametric) statistical inference starts from a statistical model, i.e. a family of parametrised distributions, $$X\sim F_\theta,\qquad\theta\in\Theta$$ and it introduces a supplementary pr...
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on The Bayesian approach to (parametric) statistical inference starts from a statistical model, i.e. a family of parametrised distributions, $$X\sim F_\theta,\qquad\theta\in\Theta$$ and it introduces a sup
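The definition above is cut off mid-sentence; the standard completion (a textbook fact rather than this answer's exact wording) is Bayes' theorem, $$\pi(\theta\mid x)=\frac{f_\theta(x)\,\pi(\theta)}{\int_\Theta f_\theta(x)\,\pi(\theta)\,\mathrm{d}\theta}\;\propto\; f_\theta(x)\,\pi(\theta),$$ where $f_\theta$ is the density of $F_\theta$ and the observed data $x$ enters only as a fixed value that is conditioned on.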
14,460
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on. How?
Maybe the confusion comes from the short hand $p(\theta|y)$ which actually means $p(\theta|Y=y)$, the random variable $Y$ interpreted as generating the data takes the fixed value $y$, fixed after actually having observed the data? So the data are random in the sense of having a distribution as long as they're uncertain...
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on
Maybe the confusion comes from the short hand $p(\theta|y)$ which actually means $p(\theta|Y=y)$, the random variable $Y$ interpreted as generating the data takes the fixed value $y$, fixed after actu
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on. How? Maybe the confusion comes from the short hand $p(\theta|y)$ which actually means $p(\theta|Y=y)$, the random variable $Y$ interpreted as generating the data takes the fixed value $y$, fixed after actually having o...
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on Maybe the confusion comes from the short hand $p(\theta|y)$ which actually means $p(\theta|Y=y)$, the random variable $Y$ interpreted as generating the data takes the fixed value $y$, fixed after actu
14,461
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on. How?
Be very careful with the statement you choose. "nonrandom" is very different from "observed". In Bayesian statistics everything is a random variable, the only difference between these random variables is some are observed and some are hidden. For example in your case $y$ is an observed random variable and $\theta$ is a...
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on
Be very careful with the statement you choose. "nonrandom" is very different from "observed". In Bayesian statistics everything is a random variable, the only difference between these random variables
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on. How? Be very careful with the statement you choose. "nonrandom" is very different from "observed". In Bayesian statistics everything is a random variable, the only difference between these random variables is some are ...
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on Be very careful with the statement you choose. "nonrandom" is very different from "observed". In Bayesian statistics everything is a random variable, the only difference between these random variables
14,462
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on. How?
To be concrete, consider the simple case of throwing a die. Every face has a probability of being thrown. The outcome of all throws is non-random (it is a fixed pattern determined by throwing the die a lot of times). On this pattern, you can apply a new chance of appearing. If you throw two dice, a new pattern wil...
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on
To be concrete, consider the simple case of throwing a die. Every face has a probability of being thrown. The outcome of all throws is non-random (it is a fixed pattern determined by throwing the die a
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on. How? To be concrete, consider the simple case of throwing a die. Every face has a probability of being thrown. The outcome of all throws is non-random (it is a fixed pattern determined by throwing the die a lot of times...
In Bayesian statistics, data is considered nonrandom but can have a probability or be conditioned on To be concrete, consider the simple case of throwing a die. Every face has a probability of being thrown. The outcome of all throws is non-random (it is a fixed pattern determined by throwing the die a
14,463
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
There is a way to estimate the consequences for out-of-sample performance, provided that the decision-making process in the modeling can be adequately turned into an automated or semi-automated process. That's to repeat the entire modeling process on multiple bootstrap re-samples of the data set. That's about as close ...
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
There is a way to estimate the consequences for out-of-sample performance, provided that the decision-making process in the modeling can be adequately turned into an automated or semi-automated proces
When we plot data and then use nonlinear transformations in a regression model are we data-snooping? There is a way to estimate the consequences for out-of-sample performance, provided that the decision-making process in the modeling can be adequately turned into an automated or semi-automated process. That's to repeat...
When we plot data and then use nonlinear transformations in a regression model are we data-snooping? There is a way to estimate the consequences for out-of-sample performance, provided that the decision-making process in the modeling can be adequately turned into an automated or semi-automated proces
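A minimal R sketch of that procedure: the data-driven step (here a hypothetical choice between an identity and a log transformation by AIC, standing in for whatever one does after plotting) is wrapped in a function and re-run inside every bootstrap resample, so its cost is included in the optimism estimate.

# Bootstrap the entire modeling process, including the transformation choice.
set.seed(1)
n <- 100
d <- data.frame(x = runif(n, 1, 10)); d$y <- log(d$x) + rnorm(n, sd = 0.2)
fit_pipeline <- function(dat) {           # the whole process, automated
  m1 <- lm(y ~ x, dat); m2 <- lm(y ~ log(x), dat)
  if (AIC(m2) < AIC(m1)) m2 else m1       # the data-driven choice
}
optimism <- replicate(200, {
  b <- d[sample(n, replace = TRUE), ]
  m <- fit_pipeline(b)
  mean((d$y - predict(m, d))^2) -         # error back on the original data
    mean((b$y - predict(m, b))^2)         # minus apparent error on the resample
})
mean(optimism)   # estimated over-optimism of the apparent in-sample fit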
14,464
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
Here's a basic answer from a machine-learning perspective. The more complex and large the model class you consider, the better you will be able to fit any dataset, but the less confidence you can have in out-of-sample performance. In other words, the more likely you are to overfit to your sample. In data-snooping, one ...
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
Here's a basic answer from a machine-learning perspective. The more complex and large the model class you consider, the better you will be able to fit any dataset, but the less confidence you can have
When we plot data and then use nonlinear transformations in a regression model are we data-snooping? Here's a basic answer from a machine-learning perspective. The more complex and large the model class you consider, the better you will be able to fit any dataset, but the less confidence you can have in out-of-sample p...
When we plot data and then use nonlinear transformations in a regression model are we data-snooping? Here's a basic answer from a machine-learning perspective. The more complex and large the model class you consider, the better you will be able to fit any dataset, but the less confidence you can have
14,465
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
Here is an answer from a physics perspective. If you are doing excessive "fitting," then you might be data snooping. However, if you are "modeling" in the way we mean in physics, then you are actually doing what you are supposed to do. If your response variable is decibels and your explanatory variables are things li...
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
Here is an answer from a physics perspective. If you are doing excessive "fitting," then you might be data snooping. However, if you are "modeling" in the way we mean in physics, then you are actually
When we plot data and then use nonlinear transformations in a regression model are we data-snooping? Here is an answer from a physics perspective. If you are doing excessive "fitting," then you might be data snooping. However, if you are "modeling" in the way we mean in physics, then you are actually doing what you are...
When we plot data and then use nonlinear transformations in a regression model are we data-snooping? Here is an answer from a physics perspective. If you are doing excessive "fitting," then you might be data snooping. However, if you are "modeling" in the way we mean in physics, then you are actually
14,466
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
Finding iteratively the best analytical model that fits data that has an error term is acceptable within the constraints nicely explained in the article you quote. But perhaps what you are asking is what is the effectiveness of such model when you use it to predict out-of-sample data that was not used to generate the m...
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
Finding iteratively the best analytical model that fits data that has an error term is acceptable within the constraints nicely explained in the article you quote. But perhaps what you are asking is w
When we plot data and then use nonlinear transformations in a regression model are we data-snooping? Finding iteratively the best analytical model that fits data that has an error term is acceptable within the constraints nicely explained in the article you quote. But perhaps what you are asking is what is the effectiv...
When we plot data and then use nonlinear transformations in a regression model are we data-snooping? Finding iteratively the best analytical model that fits data that has an error term is acceptable within the constraints nicely explained in the article you quote. But perhaps what you are asking is w
14,467
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
We have let the data affect our model. Well, all models are based on data. The issue is whether the model is being constructed from training data or testing data. If you make decisions about what type of model you want to look into based on plots of the training data, that's not data snooping. Ideally, any metrics descri...
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
We have let the data affect our model. Well, all models are based on data. The issue is whether the model is being constructed from training data or testing data. If you make decisions about what type o
When we plot data and then use nonlinear transformations in a regression model are we data-snooping? We have let the data affect our model. Well, all models are based on data. The issue is whether the model is being constructed from training data or testing data. If you make decisions about what type of model you want to...
When we plot data and then use nonlinear transformations in a regression model are we data-snooping? We have let the data affect our model. Well, all models are based on data. The issue is whether the model is being constructed from training data or testing data. If you make decisions about what type o
14,468
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
Another physics perspective (see also @albalter's nice answer). In the analysis of physics data understanding the "error" bars in measurement is of paramount importance. If you cannot account for the size of the measured errors that may reveal an unknown causative phenomenon or, more usually and sadly, some unforeseen...
When we plot data and then use nonlinear transformations in a regression model are we data-snooping?
Another physics perspective (see also @albalter's nice answer). In the analysis of physics data understanding the "error" bars in measurement is of paramount importance. If you cannot account for the
When we plot data and then use nonlinear transformations in a regression model are we data-snooping? Another physics perspective (see also @albalter's nice answer). In the analysis of physics data understanding the "error" bars in measurement is of paramount importance. If you cannot account for the size of the measur...
When we plot data and then use nonlinear transformations in a regression model are we data-snooping? Another physics perspective (see also @albalter's nice answer). In the analysis of physics data understanding the "error" bars in measurement is of paramount importance. If you cannot account for the
14,469
How to do data augmentation and train-validate split?
First split the data into training and validation sets, then do data augmentation on the training set. You use your validation set to try to estimate how your method works on real world data, thus it should only contain real world data. Adding augmented data will not improve the accuracy of the validation. It will at b...
How to do data augmentation and train-validate split?
First split the data into training and validation sets, then do data augmentation on the training set. You use your validation set to try to estimate how your method works on real world data, thus it
How to do data augmentation and train-validate split? First split the data into training and validation sets, then do data augmentation on the training set. You use your validation set to try to estimate how your method works on real world data, thus it should only contain real world data. Adding augmented data will no...
How to do data augmentation and train-validate split? First split the data into training and validation sets, then do data augmentation on the training set. You use your validation set to try to estimate how your method works on real world data, thus it
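The prescribed order of operations in a few lines of R; the Gaussian-noise augmentation is a generic stand-in for whatever augmentation (crops, flips, shifts) the task actually uses:

# Split first; augment ONLY the training rows. Validation stays real data.
set.seed(1)
d <- data.frame(x = rnorm(100), y = rnorm(100))
val_idx <- sample(nrow(d), 20)
val   <- d[val_idx, ]                        # untouched real data
train <- d[-val_idx, ]
augment <- function(dat, k = 3, noise_sd = 0.05)  # k noisy copies per row
  do.call(rbind, c(list(dat),
                   replicate(k, transform(dat, x = x + rnorm(nrow(dat), sd = noise_sd)),
                             simplify = FALSE)))
train_aug <- augment(train)                  # 80 * (k + 1) = 320 rows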
14,470
How to do data augmentation and train-validate split?
Never do 3, as you will get leakage. For example, assume the augmentation is a 1-pixel shift left. If the split is not augmentation-aware, you may get very similar data samples in both train and validation.
How to do data augmentation and train-validate split?
Never do 3, as you will get leakage. For example, assume the augmentation is a 1-pixel shift left. If the split is not augmentation-aware, you may get very similar data samples in both train and valida
How to do data augmentation and train-validate split? Never do 3, as you will get leakage. For example, assume the augmentation is a 1-pixel shift left. If the split is not augmentation-aware, you may get very similar data samples in both train and validation.
How to do data augmentation and train-validate split? Never do 3, as you will get leakage. For example, assume the augmentation is a 1-pixel shift left. If the split is not augmentation-aware, you may get very similar data samples in both train and valida
14,471
How to do data augmentation and train-validate split?
Data Augmentation means adding external data/information to the existing data which is being analyzed. So, since the entire augmented data set would be used for machine learning, the following process would be more suitable: Do data augmentation --> Splitting data
How to do data augmentation and train-validate split?
Data Augmentation means adding external data/information to the existing data which is being analyzed. So, since the entire augmented data set would be used for machine learning, the following process w
How to do data augmentation and train-validate split? Data Augmentation means adding external data/information to the existing data which is being analyzed. So, since the entire augmented data set would be used for machine learning, the following process would be more suitable: Do data augmentation --> Splitting data
How to do data augmentation and train-validate split? Data Augmentation means adding external data/information to the existing data which is being analyzed. So, since the entire augmented data set would be used for machine learning, the following process w
14,472
Misunderstanding a P-value?
Because of your comments I will make two separate sections: p-values In statistical hypothesis testing you can find 'statistical evidence' for the alternative hypothesis; As I explained in What follows if we fail to reject the null hypothesis?, it is similar to 'proof by contradiction' in mathematics. So if we want ...
Misunderstanding a P-value?
Because of your comments I will make two separate sections: p-values In statistical hypothesis testing you can find 'statistical evidence' for the alternative hypothesis; As I explained in What foll
Misunderstanding a P-value? Because of your comments I will make two separate sections: p-values In statistical hypothesis testing you can find 'statistical evidence' for the alternative hypothesis; As I explained in What follows if we fail to reject the null hypothesis?, it is similar to 'proof by contradiction' in ...
Misunderstanding a P-value? Because of your comments I will make two separate sections: p-values In statistical hypothesis testing you can find 'statistical evidence' for the alternative hypothesis; As I explained in What foll
14,473
Misunderstanding a P-value?
The first statement is not strictly true. From a nifty paper on the misunderstanding of significance: (http://myweb.brooklyn.liu.edu/cortiz/PDF%20Files/Misinterpretations%20of%20Significance.pdf) "[This statement] may look similar to the definition of an error of Type I (i.e., the probability of rejecting the H0 a...
Misunderstanding a P-value?
The first statement is not strictly true. From a nifty paper on the misunderstanding of significance: (http://myweb.brooklyn.liu.edu/cortiz/PDF%20Files/Misinterpretations%20of%20Significance.pdf) "
Misunderstanding a P-value? The first statement is not strictly true. From a nifty paper on the misunderstanding of significance: (http://myweb.brooklyn.liu.edu/cortiz/PDF%20Files/Misinterpretations%20of%20Significance.pdf) "[This statement] may look similar to the definition of an error of Type I (i.e., the proba...
Misunderstanding a P-value? The first statement is not strictly true. From a nifty paper on the misunderstanding of significance: (http://myweb.brooklyn.liu.edu/cortiz/PDF%20Files/Misinterpretations%20of%20Significance.pdf) "
14,474
Misunderstanding a P-value?
The correct interpretation of a p-value is the conditional probability of an outcome at least as conducive to the alternative hypothesis as the observed value (at least as "extreme"), assuming the null hypothesis is true. Incorrect interpretations generally involve either a marginal probability or a switching of the...
Misunderstanding a P-value?
The correct interpretation of a p-value is the conditional probability of an outcome at least as conducive to the alternative hypothesis as the observed value (at least as "extreme"), assuming the nu
Misunderstanding a P-value? The correct interpretation of a p-value is the conditional probability of an outcome at least as conducive to the alternative hypothesis as the observed value (at least as "extreme"), assuming the null hypothesis is true. Incorrect interpretations generally involve either a marginal proba...
Misunderstanding a P-value? The correct interpretation of a p-value is the conditional probability of an outcome at least as conducive to the alternative hypothesis as the observed value (at least as "extreme"), assuming the nu
14,475
Misunderstanding a P-value?
The p-value allows us to determine whether the null hypothesis (or the claimed hypothesis) can be rejected or not. If the p-value is less than the significance level, α, then this represents a statistically significant result, and the null hypothesis should be rejected. If the p-value is greater than the significance l...
Misunderstanding a P-value?
The p-value allows us to determine whether the null hypothesis (or the claimed hypothesis) can be rejected or not. If the p-value is less than the significance level, α, then this represents a statist
Misunderstanding a P-value? The p-value allows us to determine whether the null hypothesis (or the claimed hypothesis) can be rejected or not. If the p-value is less than the significance level, α, then this represents a statistically significant result, and the null hypothesis should be rejected. If the p-value is gre...
Misunderstanding a P-value? The p-value allows us to determine whether the null hypothesis (or the claimed hypothesis) can be rejected or not. If the p-value is less than the significance level, α, then this represents a statist
14,476
How is it possible that Poisson GLM accepts non-integer numbers?
Of course you are correct that the Poisson distribution technically is defined only for integers. However, statistical modeling is the art of good approximations ("all models are wrong"), and there are times when it makes sense to treat non-integer data as though it were [approximately] Poisson. For example, if you ...
How is it possible that Poisson GLM accepts non-integer numbers?
Of course you are correct that the Poisson distribution technically is defined only for integers. However, statistical modeling is the art of good approximations ("all models are wrong"), and there a
How is it possible that Poisson GLM accepts non-integer numbers? Of course you are correct that the Poisson distribution technically is defined only for integers. However, statistical modeling is the art of good approximations ("all models are wrong"), and there are times when it makes sense to treat non-integer data ...
How is it possible that Poisson GLM accepts non-integer numbers? Of course you are correct that the Poisson distribution technically is defined only for integers. However, statistical modeling is the art of good approximations ("all models are wrong"), and there a
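What this looks like in practice in R: a log-link fit to positive, non-integer responses. family = poisson would warn about non-integer counts, while quasipoisson solves the same estimating equations without that complaint (the gamma draw below is just a convenient way to simulate non-integer values with a log-linear mean):

set.seed(1)
x <- runif(100)
y <- rgamma(100, shape = 2, rate = 2 / exp(1 + x))   # positive, non-integer, mean exp(1 + x)
fit <- glm(y ~ x, family = quasipoisson(link = "log"))
summary(fit)$coefficients                            # intercept and slope both near 1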
14,477
How is it possible that Poisson GLM accepts non-integer numbers?
For a response $y$, if you assume the logarithm of its expectation is a linear combination of predictors $\renewcommand{\vec}[1]{\boldsymbol{#1}}\vec{x}$ $$\operatorname{E}Y_i=\exp{\vec\beta^{\mathrm{T}}\vec{x}_i}$$ then consistent estimates for the regression coefficients $\vec\beta$ can be obtained by solving the sco...
How is it possible that Poisson GLM accepts non-integer numbers?
For a response $y$, if you assume the logarithm of its expectation is a linear combination of predictors $\renewcommand{\vec}[1]{\boldsymbol{#1}}\vec{x}$ $$\operatorname{E}Y_i=\exp{\vec\beta^{\mathrm{
How is it possible that Poisson GLM accepts non-integer numbers? For a response $y$, if you assume the logarithm of its expectation is a linear combination of predictors $\renewcommand{\vec}[1]{\boldsymbol{#1}}\vec{x}$ $$\operatorname{E}Y_i=\exp{\vec\beta^{\mathrm{T}}\vec{x}_i}$$ then consistent estimates for the regre...
How is it possible that Poisson GLM accepts non-integer numbers? For a response $y$, if you assume the logarithm of its expectation is a linear combination of predictors $\renewcommand{\vec}[1]{\boldsymbol{#1}}\vec{x}$ $$\operatorname{E}Y_i=\exp{\vec\beta^{\mathrm{
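The score equations referred to (and truncated) above take the standard form $$\sum_{i=1}^{n}\bigl(y_i-\exp(\boldsymbol\beta^{\mathrm{T}}\mathbf{x}_i)\bigr)\,\mathbf{x}_i=\mathbf{0},$$ which involve only the conditional mean, so nothing in them requires the $y_i$ to be integers.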
14,478
Poisson or quasi poisson in a regression with count data and overdispersion?
When trying to determine what sort of glm equation you want to estimate, you should think about plausible relationships between the expected value of your target variable given the right hand side (rhs) variables and the variance of the target variable given the rhs variables. Plots of the residuals vs. the fitted val...
Poisson or quasi poisson in a regression with count data and overdispersion?
When trying to determine what sort of glm equation you want to estimate, you should think about plausible relationships between the expected value of your target variable given the right hand side (rh
Poisson or quasi poisson in a regression with count data and overdispersion? When trying to determine what sort of glm equation you want to estimate, you should think about plausible relationships between the expected value of your target variable given the right hand side (rhs) variables and the variance of the target...
Poisson or quasi poisson in a regression with count data and overdispersion? When trying to determine what sort of glm equation you want to estimate, you should think about plausible relationships between the expected value of your target variable given the right hand side (rh
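A minimal R sketch of the suggested diagnostic: fit a first-pass Poisson model, plot squared residuals against fitted means, and overlay candidate mean-variance curves (the negative-binomial simulation is only there to make overdispersion visible):

set.seed(1)
x <- runif(300)
y <- rnbinom(300, mu = exp(1 + x), size = 2)         # overdispersed counts
fit0 <- glm(y ~ x, family = poisson)
mu <- fitted(fit0)
plot(mu, residuals(fit0, type = "response")^2,
     xlab = "fitted mean", ylab = "squared residual")
lines(sort(mu), sort(mu))                            # Var = mu (pure Poisson)
lines(sort(mu), sort(mu) + sort(mu)^2 / 2, lty = 2)  # Var = mu + mu^2/2 (NB, size = 2)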
14,479
Poisson or quasi poisson in a regression with count data and overdispersion?
You are right, these data may well be overdispersed. Quasipoisson is a remedy: it estimates a scale parameter as well (which is fixed for Poisson models, since the variance equals the mean) and will provide a better fit. However, you are then no longer doing maximum likelihood, and certain model tests and ind...
Poisson or quasi poisson in a regression with count data and overdispersion?
You are right, these data may well be overdispersed. Quasipoisson is a remedy: it estimates a scale parameter as well (which is fixed for Poisson models, since the variance equals the mean) and will
Poisson or quasi poisson in a regression with count data and overdispersion? You are right, these data may well be overdispersed. Quasipoisson is a remedy: it estimates a scale parameter as well (which is fixed for Poisson models, since the variance equals the mean) and will provide a better fit. However, you are then no longe...
Poisson or quasi poisson in a regression with count data and overdispersion? You are right, these data may well be overdispersed. Quasipoisson is a remedy: it estimates a scale parameter as well (which is fixed for Poisson models, since the variance equals the mean) and will
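In R the two fits differ only in the family argument; the quasi-Poisson summary reports the estimated scale (dispersion) parameter, and its square root inflates the standard errors:

# Same point estimates, different standard errors.
set.seed(1)
x <- runif(300)
y <- rnbinom(300, mu = exp(1 + x), size = 2)    # overdispersed counts
f_pois  <- glm(y ~ x, family = poisson)
f_quasi <- glm(y ~ x, family = quasipoisson)
summary(f_quasi)$dispersion                     # estimated scale, well above 1 here
cbind(poisson = coef(summary(f_pois))[, "Std. Error"],
      quasipoisson = coef(summary(f_quasi))[, "Std. Error"])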
14,480
Poisson or quasi poisson in a regression with count data and overdispersion?
You might want to try this: summary(model,dispersion =...
Poisson or quasi poisson in a regression with count data and overdispersion?
You might want to try this: summary(model,dispersion =
Poisson or quasi poisson in a regression with count data and overdispersion? You might want to try this: summary(model,dispersion =...
Poisson or quasi poisson in a regression with count data and overdispersion? You might want to try this: summary(model,dispersion =
14,481
Showing machine learning results are statistically irrelevant
You answered yourself: I made two additional models (mean and last sample) which often match or beat the RMSE of the RF and ANN models published in the paper. The mean model just takes the mean of training and uses that in all predictions. The dataset is a timeseries (time-varying, usually 1-2 samples per week), so th...
Showing machine learning results are statistically irrelevant
You answered yourself: I made two additional models (mean and last sample) which often match or beat the RMSE of the RF and ANN models published in the paper. The mean model just takes the mean of tr
Showing machine learning results are statistically irrelevant You answered yourself: I made two additional models (mean and last sample) which often match or beat the RMSE of the RF and ANN models published in the paper. The mean model just takes the mean of training and uses that in all predictions. The dataset is a ...
Showing machine learning results are statistically irrelevant You answered yourself: I made two additional models (mean and last sample) which often match or beat the RMSE of the RF and ANN models published in the paper. The mean model just takes the mean of tr
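The two baselines take a few lines of R; the series below is a toy stand-in for the paper's data:

# Baselines every time-series model should beat: training mean and last value.
set.seed(1)
y <- 10 + cumsum(rnorm(120, sd = 0.5))           # toy weekly series
train <- y[1:100]; test <- y[101:120]
rmse <- function(pred, obs) sqrt(mean((obs - pred)^2))
rmse(mean(train), test)                          # "mean" baseline
rmse(c(train[100], test[-length(test)]), test)   # "last sample" (naive) baseline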
14,482
Showing machine learning results are statistically irrelevant
Piggybacking on Tim's answer. You clearly already have trained a better model, so just show your colleagues its results. Here's a note, though: R2 score could prove to be an unreliable metric depending on the problem. For example, a regression model predicting the price of a stock in the following day. Any small amount...
Showing machine learning results are statistically irrelevant
Piggybacking on Tim's answer. You clearly already have trained a better model, so just show your colleagues its results. Here's a note, though: R2 score could prove to be an unreliable metric dependin
Showing machine learning results are statistically irrelevant Piggybacking on Tim's answer. You clearly already have trained a better model, so just show your colleagues its results. Here's a note, though: R2 score could prove to be an unreliable metric depending on the problem. For example, a regression model predicti...
Showing machine learning results are statistically irrelevant Piggybacking on Tim's answer. You clearly already have trained a better model, so just show your colleagues its results. Here's a note, though: R2 score could prove to be an unreliable metric dependin
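The stock-price point is easy to reproduce in R: on a simulated random walk, a "predict yesterday's value" model with no real skill still posts a near-perfect R2:

set.seed(1)
price <- cumsum(rnorm(1000))     # random-walk "price"
pred  <- head(price, -1)         # naive lag-1 prediction
obs   <- tail(price, -1)
1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2)   # R2, typically above 0.99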
14,483
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
This effect occurs not only in leave-one-out but in k-fold cross-validation (CV) in general. Your training and validation sets are not independent, because any observation allocated to your validation set obviously influences your training set (since it is taken out of it). To what extent this is the c...
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
This effect occurs not only in leave-one-out but in k-fold cross-validation (CV) in general. Your training and validation sets are not independent, because any observation allocated to your val
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error? This effect occurs not only in leave-one-out but in k-fold cross-validation (CV) in general. Your training and validation sets are not independent, because any observation allocated to your validation set obviously influences y...
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error? This effect occurs not only in leave-one-out but in k-fold cross-validation (CV) in general. Your training and validation sets are not independent, because any observation allocated to your val
14,484
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
Given that the size of the training sample ($n_{training}$) is smaller than the size of the entire sample ($n$) $$ n_{training}<n, $$ the parameter estimates based on training subsamples in CV (be it LOO or K-fold) will in expectation be less accurate/precise than these based on the entire sample. This will cause the...
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
Given that the size of the training sample ($n_{training}$) is smaller than the size of the entire sample ($n$) $$ n_{training}<n, $$ the parameter estimates based on training subsamples in CV (be i
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error? Given that the size of the training sample ($n_{training}$) is smaller than the size of the entire sample ($n$) $$ n_{training}<n, $$ the parameter estimates based on training subsamples in CV (be it LOO or K-fold) will in expectati...
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error? Given that the size of the training sample ($n_{training}$) is smaller than the size of the entire sample ($n$) $$ n_{training}<n, $$ the parameter estimates based on training subsamples in CV (be i
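A small R simulation of that effect: the LOOCV estimate (each fold trains on n - 1 points) sits slightly above the true error of the model trained on all n points, here measured on a large fresh test set:

set.seed(1)
n <- 20
sim_once <- function() {
  x <- runif(n); y <- 1 + 2 * x + rnorm(n)
  loo <- mean(sapply(seq_len(n), function(i) {
    m <- lm(y ~ x, subset = -i)                       # train on n - 1 points
    (y[i] - predict(m, data.frame(x = x[i])))^2
  }))
  xt <- runif(5000); yt <- 1 + 2 * xt + rnorm(5000)   # fresh test data
  full <- mean((yt - predict(lm(y ~ x), data.frame(x = xt)))^2)
  c(loocv = loo, true_err_full_fit = full)
}
rowMeans(replicate(500, sim_once()))   # loocv mean slightly above the true error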
14,485
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
You aren't adding negative correlation between observation and mean; you're taking out positive correlation between observation and mean. The whole problem with not doing cross-validation is that if you have n data points, then each time you do a prediction for one of the data points, 1/n of the prediction ...
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
You aren't adding negative correlation between observation and mean; you're taking out positive correlation between observation and mean. The whole problem with not doing cross-validation
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error? You aren't adding negative correlation between observation and mean; you're taking out positive correlation between observation and mean. The whole problem with not doing cross-validation is that if you have n data points,...
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error? You aren't adding negative correlation between observation and mean; you're taking out positive correlation between observation and mean. The whole problem with not doing cross-validation
14,486
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
The answer to both questions is yes: yes, LOO does have a pessimistic bias, and yes, the described effect of additional pessimistic bias is well known. Richard Hardy's answer gives a good explanation of the well-known slight pessimistic bias of a correctly performed resampling validation (including all flavors of cr...
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error?
The answer to both questions is yes: yes, LOO does have a pessimistic bias, and yes, the described effect of additional pessimistic bias is well known. Richard Hardy's answer gives a good explanati
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error? The answer to both questions is yes: yes, LOO does have a pessimistic bias, and yes, the described effect of additional pessimistic bias is well known. Richard Hardy's answer gives a good explanation of the well-known slight pessim...
Is leave-one-out cross validation (LOOCV) known to systematically overestimate error? The answer to both questions is yes: yes, LOO does have a pessimistic bias, and yes, the described effect of additional pessimistic bias is well known. Richard Hardy's answer gives a good explanati
14,487
Most effective use of colour in heat/contour maps
Rainbow color maps, as they're often called, remain popular despite documented perceptual inefficiencies. The main problems with rainbow (and other spectral) color maps are: the colors are not in a perceptual order; the luminance bounces around (our eyes are mostly rods for luminance, not cones for color); we see hues ca...
Most effective use of colour in heat/contour maps
Rainbow color maps, as they're often called, remain popular despite documented perceptual inefficiencies. The main problems with rainbow (and other spectral) color maps are: the colors are not in a p
Most effective use of colour in heat/contour maps Rainbow color maps, as they're often called, remain popular despite documented perceptual inefficiencies. The main problems with rainbow (and other spectral) color maps are: the colors are not in a perceptual order; the luminance bounces around (our eyes are mostly rods...
Most effective use of colour in heat/contour maps Rainbow color maps, as they're often called, remain popular despite documented perceptual inefficiencies. The main problems with rainbow (and other spectral) color maps are: the colors are not in a p
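The luminance problem is easy to see side by side in base R (hcl.colors requires R >= 3.6; "Viridis" is one of its perceptually ordered palettes):

z <- outer(seq(-3, 3, length = 100), seq(-3, 3, length = 100),
           function(x, y) exp(-(x^2 + y^2) / 2))
op <- par(mfrow = c(1, 2))
image(z, col = rainbow(64), main = "rainbow: luminance bounces")
image(z, col = hcl.colors(64, "Viridis"), main = "monotone luminance")
par(op)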
14,488
Most effective use of colour in heat/contour maps
I agree with @xan about the inefficiencies of rainbow color maps. Here is another paper that shows that rainbow/categorical color maps are substantially worse than diverging ones for quantitative tasks, from InfoVis '11: Michelle Borkin, Krzysztof Gajos, Amanda Peters, Dimitrios Mitsouras, Simone Melchionna, Frank Ryb...
Most effective use of colour in heat/contour maps
I agree with @xan about the inefficiencies of rainbow color maps. Here is another paper that shows that rainbow/categorical color maps are substantially worse than diverging ones for quantitative task
Most effective use of colour in heat/contour maps I agree with @xan about the inefficiencies of rainbow color maps. Here is another paper that shows that rainbow/categorical color maps are substantially worse than diverging ones for quantitative tasks, from InfoVis '11: Michelle Borkin, Krzysztof Gajos, Amanda Peters,...
Most effective use of colour in heat/contour maps I agree with @xan about the inefficiencies of rainbow color maps. Here is another paper that shows that rainbow/categorical color maps are substantially worse than diverging ones for quantitative task
14,489
Maximum value of coefficient of variation for bounded data set
Geometry provides insight and classical inequalities afford easy access to rigor. Geometric solution We know, from the geometry of least squares, that $\mathbf{\bar{x}} = (\bar{x}, \bar{x}, \ldots, \bar{x})$ is the orthogonal projection of the vector of data $\mathbf{x}=(x_1, x_2, \ldots, x_n)$ onto the linear subspace...
Maximum value of coefficient of variation for bounded data set
Geometry provides insight and classical inequalities afford easy access to rigor. Geometric solution We know, from the geometry of least squares, that $\mathbf{\bar{x}} = (\bar{x}, \bar{x}, \ldots, \b
Maximum value of coefficient of variation for bounded data set Geometry provides insight and classical inequalities afford easy access to rigor. Geometric solution We know, from the geometry of least squares, that $\mathbf{\bar{x}} = (\bar{x}, \bar{x}, \ldots, \bar{x})$ is the orthogonal projection of the vector of dat...
Maximum value of coefficient of variation for bounded data set Geometry provides insight and classical inequalities afford easy access to rigor. Geometric solution We know, from the geometry of least squares, that $\mathbf{\bar{x}} = (\bar{x}, \bar{x}, \ldots, \b
14,490
Maximum value of coefficient of variation for bounded data set
Some references, as small candles on the cakes of others: Katsnelson and Kotz (1957) proved that so long as all $x_i \ge 0$, then the coefficient of variation cannot exceed $\sqrt{n − 1}$. This result was mentioned earlier by Longley (1952). Cramér (1946, p.357) proved a less sharp result, and Kirby (1974) proved a less...
Maximum value of coefficient of variation for bounded data set
Some references, as small candles on the cakes of others: Katsnelson and Kotz (1957) proved that so long as all $x_i \ge 0$, then the coefficient of variation cannot exceed $\sqrt{n − 1}$. This result
Maximum value of coefficient of variation for bounded data set Some references, as small candles on the cakes of others: Katsnelson and Kotz (1957) proved that so long as all $x_i \ge 0$, then the coefficient of variation cannot exceed $\sqrt{n − 1}$. This result was mentioned earlier by Longley (1952). Cramér (1946, p....
Maximum value of coefficient of variation for bounded data set Some references, as small candles on the cakes of others: Katsnelson and Kotz (1957) proved that so long as all $x_i \ge 0$, then the coefficient of variation cannot exceed $\sqrt{n − 1}$. This result
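As a sanity check, the bound is attained when all the mass sits on a single observation, under the population-style $1/n$ variance convention for which the $\sqrt{n-1}$ bound holds (with the $1/(n-1)$ sample variance, the corresponding maximum is $\sqrt{n}$): for $x=(c,0,\dots,0)$ with $c>0$, $$\bar{x}=\frac{c}{n},\qquad \sigma^2=\frac{1}{n}\left[\left(c-\frac{c}{n}\right)^2+(n-1)\frac{c^2}{n^2}\right]=\frac{c^2(n-1)}{n^2},\qquad \frac{\sigma}{\bar{x}}=\sqrt{n-1}.$$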
14,491
Maximum value of coefficient of variation for bounded data set
With two numbers $x_i \ge x_j$, some $\delta \gt 0$ and any $\mu$: $$(x_i+\delta - \mu)^2 + (x_j - \delta - \mu)^2 - (x_i - \mu)^2 - (x_j - \mu)^2 = 2\delta(x_i - x_j +\delta) \gt 0.$$ Applying this to $n$ non-negative datapoints, this means that unless all but one of the $n$ numbers are zero and so cannot be reduced f...
Maximum value of coefficient of variation for bounded data set
With two numbers $x_i \ge x_j$, some $\delta \gt 0$ and any $\mu$: $$(x_i+\delta - \mu)^2 + (x_j - \delta - \mu)^2 - (x_i - \mu)^2 - (x_j - \mu)^2 = 2\delta(x_i - x_j +\delta) \gt 0.$$ Applying this t
Maximum value of coefficient of variation for bounded data set With two numbers $x_i \ge x_j$, some $\delta \gt 0$ and any $\mu$: $$(x_i+\delta - \mu)^2 + (x_j - \delta - \mu)^2 - (x_i - \mu)^2 - (x_j - \mu)^2 = 2\delta(x_i - x_j +\delta) \gt 0.$$ Applying this to $n$ non-negative datapoints, this means that unless all...
Maximum value of coefficient of variation for bounded data set With two numbers $x_i \ge x_j$, some $\delta \gt 0$ and any $\mu$: $$(x_i+\delta - \mu)^2 + (x_j - \delta - \mu)^2 - (x_i - \mu)^2 - (x_j - \mu)^2 = 2\delta(x_i - x_j +\delta) \gt 0.$$ Applying this t
14,492
What's the point of asymptotics?
The first reason we look at the asymptotics of estimators is that we want to check that our estimator is sensible. One aspect of this investigation is that we expect a sensible estimator will generally get better as we get more data, and it eventually becomes "perfect" as the amount of data gets to the full population...
What's the point of asymptotics?
The first reason we look at the asymptotics of estimators is that we want to check that our estimator is sensible. One aspect of this investigation is that we expect a sensible estimator will general
What's the point of asymptotics? The first reason we look at the asymptotics of estimators is that we want to check that our estimator is sensible. One aspect of this investigation is that we expect a sensible estimator will generally get better as we get more data, and it eventually becomes "perfect" as the amount of...
What's the point of asymptotics? The first reason we look at the asymptotics of estimators is that we want to check that our estimator is sensible. One aspect of this investigation is that we expect a sensible estimator will general
14,493
What's the point of asymptotics?
It's true, in general, that consistency is not the be-all-end-all of a statistic. But neither is unbiasedness for the same reason! When you accept biased estimators, you open a whole class of Bayes estimators that beat OLS in many respects. Case-in-point: the problem of shrinkage and L2 penalization in the estimation o...
What's the point of asymptotics?
It's true, in general, that consistency is not the be-all-end-all of a statistic. But neither is unbiasedness for the same reason! When you accept biased estimators, you open a whole class of Bayes es
What's the point of asymptotics? It's true, in general, that consistency is not the be-all-end-all of a statistic. But neither is unbiasedness for the same reason! When you accept biased estimators, you open a whole class of Bayes estimators that beat OLS in many respects. Case-in-point: the problem of shrinkage and L2...
What's the point of asymptotics? It's true, in general, that consistency is not the be-all-end-all of a statistic. But neither is unbiasedness for the same reason! When you accept biased estimators, you open a whole class of Bayes es
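A small R illustration of the shrinkage point, assuming the glmnet package (alpha = 0 gives the L2/ridge penalty): with many correlated predictors, the biased ridge coefficients typically have lower mean squared error about the truth than OLS:

library(glmnet)
set.seed(1)
n <- 50; p <- 40
X <- matrix(rnorm(n * p), n, p) %*% chol(0.5 + 0.5 * diag(p))  # correlated predictors
beta <- rnorm(p, sd = 0.3)
y <- drop(X %*% beta) + rnorm(n)
b_ols   <- coef(lm(y ~ X))[-1]
b_ridge <- as.numeric(coef(cv.glmnet(X, y, alpha = 0), s = "lambda.min"))[-1]
c(mse_ols = mean((b_ols - beta)^2), mse_ridge = mean((b_ridge - beta)^2))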
14,494
What's the point of asymptotics?
Most asymptotic results are closely connected to finite first-order results. We'll review this in the non-probabilistic case and then extend to the probabilistic case. Non random case: reduced order analysis Recall the Taylor series of the function $\sin$ around the point $x=0$: \begin{equation*} \sin x - 0 = x - \fra...
What's the point of asymptotics?
Most asymptotic results are closely connected to finite first-order results. We'll review this in the non-probabilistic case and then extend to the probabilistic case. Non random case: reduced order a
What's the point of asymptotics? Most asymptotic results are closely connected to finite first-order results. We'll review this in the non-probabilistic case and then extend to the probabilistic case. Non random case: reduced order analysis Recall the Taylor series of the function $\sin$ around the point $x=0$: \begin{...
What's the point of asymptotics? Most asymptotic results are closely connected to finite first-order results. We'll review this in the non-probabilistic case and then extend to the probabilistic case. Non random case: reduced order a
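For reference, the expansion the answer starts from (the display above is truncated by the excerpt; this is the standard series): $$\sin x = x-\frac{x^3}{3!}+\frac{x^5}{5!}-\cdots,$$ so "$\sin x\approx x$ with error $O(|x|^3)$" is the non-random prototype of an asymptotic statement with an explicit error rate.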
14,495
Can glmnet logistic regression directly handle factor (categorical) variables without needing dummy variables? [closed]
glmnet cannot take factors directly; you need to transform factor variables into dummies. It is only one simple step using model.matrix, for instance: x_train <- model.matrix(~ . - 1, train[, features]) lm <- cv.glmnet(x = x_train, y = as.factor(train$y), intercept = FALSE, family = "binomial", alpha = 1, nfolds = 7) best_lambda <-...
Can glmnet logistic regression directly handle factor (categorical) variables without needing dummy
glmnet cannot take factors directly; you need to transform factor variables into dummies. It is only one simple step using model.matrix, for instance: x_train <- model.matrix(~ . - 1, train[, features]) lm
Can glmnet logistic regression directly handle factor (categorical) variables without needing dummy variables? [closed] glmnet cannot take factors directly; you need to transform factor variables into dummies. It is only one simple step using model.matrix, for instance: x_train <- model.matrix(~ . - 1, train[, features]) lm...
Can glmnet logistic regression directly handle factor (categorical) variables without needing dummy glmnet cannot take factors directly; you need to transform factor variables into dummies. It is only one simple step using model.matrix, for instance: x_train <- model.matrix(~ . - 1, train[, features]) lm
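A self-contained version of the recipe above, runnable as-is (the data frame, column names, and coefficient values are made up for illustration):

    library(glmnet)
    set.seed(1)
    train <- data.frame(
      f1 = factor(sample(c("a", "b", "c"), 200, replace = TRUE)),   # a categorical predictor
      x1 = rnorm(200)                                               # a numeric predictor
    )
    train$y <- rbinom(200, 1, plogis(0.8 * (train$f1 == "b") + train$x1))
    features <- c("f1", "x1")
    x_train <- model.matrix(~ . - 1, train[, features])   # factor expanded into dummy columns
    fit <- cv.glmnet(x = x_train, y = as.factor(train$y),
                     intercept = FALSE, family = "binomial", alpha = 1, nfolds = 7)
    fit$lambda.min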
14,496
Is there a measure of 'evenness' of spread?
A standard, powerful, well-understood, theoretically well-established, and frequently implemented measure of "evenness" is the Ripley K function and its close relative, the L function. Although these are typically used to evaluate two-dimensional spatial point configurations, the analysis needed to adapt them to one d...
Is there a measure of 'evenness' of spread?
A standard, powerful, well-understood, theoretically well-established, and frequently implemented measure of "evenness" is the Ripley K function and its close relative, the L function. Although these
Is there a measure of 'evenness' of spread? A standard, powerful, well-understood, theoretically well-established, and frequently implemented measure of "evenness" is the Ripley K function and its close relative, the L function. Although these are typically used to evaluate two-dimensional spatial point configurations...
Is there a measure of 'evenness' of spread? A standard, powerful, well-understood, theoretically well-established, and frequently implemented measure of "evenness" is the Ripley K function and its close relative, the L function. Although these
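For the one-dimensional adaptation, a naive hand-rolled sketch of a K-type statistic (my own simplification; it omits the edge corrections a careful analysis would include):

    # Naive 1D Ripley-K estimate: average number of other points within distance t,
    # rescaled by the estimated intensity. No edge correction.
    k1d <- function(x, t, a = min(x), b = max(x)) {
      n <- length(x)
      d <- abs(outer(x, x, "-"))                  # all pairwise distances
      counts <- rowSums(d <= t) - 1               # neighbours within t, excluding self
      mean(counts) * (b - a) / (n - 1)
    }
    set.seed(2)
    even    <- seq(0, 1, length.out = 50)
    clumped <- c(runif(25, 0, 0.1), runif(25, 0.9, 1))
    k1d(even, 0.05); k1d(clumped, 0.05)           # the clumped sample gives a larger value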
14,497
Is there a measure of 'evenness' of spread?
I assume that you want to measure how close the distribution is to the uniform. You can look at the distance between the cumulative distribution function of the uniform distribution and the empirical cumulative distribution function of the sample. Let's assume that the variable is defined on the set $\{1,2,3,4,5\}$. Then the u...
Is there a measure of 'evenness' of spread?
I assume that you want to measure how close the distribution is to the uniform. You can look at the distance between the cumulative distribution function of the uniform distribution and the empirical cumulati
Is there a measure of 'evenness' of spread? I assume that you want to measure how close the distribution is to the uniform. You can look at the distance between the cumulative distribution function of the uniform distribution and the empirical cumulative distribution function of the sample. Let's assume that the variable is de...
Is there a measure of 'evenness' of spread? I assume that you want to measure how close the distribution is to the uniform. You can look at the distance between the cumulative distribution function of the uniform distribution and the empirical cumulati
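In R, this distance can be computed directly; a minimal sketch, using the maximum absolute difference between the two CDFs as one natural choice of distance (the sample below is made up):

    x <- c(1, 1, 2, 2, 2, 3, 4, 5, 5, 5)   # hypothetical sample on {1,...,5}
    vals <- 1:5
    F_emp  <- ecdf(x)(vals)                 # empirical CDF at each support point
    F_unif <- vals / 5                      # CDF of the uniform on {1,...,5}
    max(abs(F_emp - F_unif))                # 0 would mean a perfectly even sample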
14,498
Is there a measure of 'evenness' of spread?
If I understand your question correctly, the "most even" distribution for you would be one where the random variable takes every observed value once—uniform in a sense. If there are "clusters" of observations at the same value, that would be uneven. Assuming we are talking about discrete observations, perhaps you could look ...
Is there a measure of 'evenness' of spread?
If I understand your question correctly, the "most even" distribution for you would be one where the random variable takes every observed value once—uniform in a sense. If there are "clusters" of obse
Is there a measure of 'evenness' of spread? If I understand your question correctly, the "most even" distribution for you would be one where the random variable takes every observed value once—uniform in a sense. If there are "clusters" of observations at the same value, that would be uneven. Assuming we are talking about di...
Is there a measure of 'evenness' of spread? If I understand your question correctly, the "most even" distribution for you would be one where the random variable takes every observed value once—uniform in a sense. If there are "clusters" of obse
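The answer is cut off before naming its exact proposal, but one simple way to operationalize the "clusters at the same value" idea is to tabulate ties (my own sketch, offered only as an assumption about the intended direction):

    x <- c(10, 10, 10, 25, 31, 47, 47, 60)   # hypothetical discrete observations
    tab <- table(x)
    sum(tab > 1)    # how many values are "clustered" (appear more than once)
    max(tab)        # size of the largest clump; all 1s would be perfectly even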
14,499
Is there a measure of 'evenness' of spread?
Clumpiness detection Ripley's L function, as noted and nicely illustrated by @whuber, will generate an indicator of "clumpiness" (i.e. amount of clustering) in the distribution for a given target distance (or normalized distance). The same is true for the discrepancy metric outlined in Martin Roberts' answer, which is ...
Is there a measure of 'evenness' of spread?
Clumpiness detection Ripley's L function, as noted and nicely illustrated by @whuber, will generate an indicator of "clumpiness" (i.e. amount of clustering) in the distribution for a given target dist
Is there a measure of 'evenness' of spread? Clumpiness detection Ripley's L function, as noted and nicely illustrated by @whuber, will generate an indicator of "clumpiness" (i.e. amount of clustering) in the distribution for a given target distance (or normalized distance). The same is true for the discrepancy metric o...
Is there a measure of 'evenness' of spread? Clumpiness detection Ripley's L function, as noted and nicely illustrated by @whuber, will generate an indicator of "clumpiness" (i.e. amount of clustering) in the distribution for a given target dist
14,500
Is there a measure of 'evenness' of spread?
The measure you are looking for is formally called discrepancy. The one-dimensional version is as follows: Let $I=[a,b)$ denote the half-open interval and consider a finite sequence $x_1,\ldots,x_N\in{I}$. For a subset $J\subset{I}$, let $A(J,N)$ denote the number of elements of this sequence inside $J$. That is,...
Is there a measure of 'evenness' of spread?
The measure you are looking for is formally called discrepancy. The one-dimensional version is as follows: Let $I=[a,b)$ denote the half-open interval and consider a finite sequence $x_1,\ldots,x_N
Is there a measure of 'evenness' of spread? The measure you are looking for is formally called discrepancy. The one-dimensional version is as follows: Let $I=[a,b)$ denote the half-open interval and consider a finite sequence $x_1,\ldots,x_N\in{I}$. For a subset $J\subset{I}$, let $A(J,N)$ denote the number of elem...
Is there a measure of 'evenness' of spread? The measure you are looking for is formally called discrepancy. The one-dimensional version is as follows: Let $I=[a,b)$ denote the half-open interval and consider a finite sequence $x_1,\ldots,x_N
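For points rescaled to $[0,1)$, the star-discrepancy variant (which restricts $J$ to intervals anchored at the left endpoint) has a well-known closed form in one dimension, $D_N^* = \frac{1}{2N} + \max_{1\le i\le N}\left|x_{(i)} - \frac{2i-1}{2N}\right|$; a minimal R sketch:

    star_disc <- function(x) {                 # x assumed to lie in [0,1)
      n <- length(x)
      s <- sort(x)
      1 / (2 * n) + max(abs(s - (2 * seq_len(n) - 1) / (2 * n)))
    }
    star_disc(seq(0.05, 0.95, by = 0.1))   # perfectly even: attains the minimum 1/(2N) = 0.05
    star_disc(runif(10))                   # random points: typically noticeably larger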