19,701 | Are MCMC based methods appropriate when Maximum a-posteriori estimation is available? | I would argue that MCMC methods aren't necessarily inappropriate, even when closed-form solutions exist. Obviously, it's nice when an analytical solution exists: it is usually fast, and you avoid concerns about convergence, among other things.
On the other hand, consistency is also important. Switching from technique to technique complicates your presentation: at best, it's extraneous detail that may confuse or distract the audience away from your substantive result, and at worst it could look like an attempt at biasing the outcomes. If I had several models, only a few of which admit closed-form solutions, I would strongly consider running them all through the same MCMC pipeline even if it weren't strictly necessary.
I suspect this, plus inertia ("we have this script that works") accounts for most of what you're seeing.
19,702 | Why is ln[E(x)] > E[ln(x)]? | Recall that $e^x\geq 1+x$
$E\left[e^{Y}\right]=e^{ E(Y)} E\left[e^{Y- E(Y)}\right]\geq e^{E(Y)} E\left[1+{Y- E(Y)}\right] = e^{E(Y)}$
So $e^{E(Y)}\leq E\left[e^{Y}\right] $
Now letting $Y=\ln X$, we have:
$e^{E(\ln X)}\leq E\left[e^{\ln X}\right]=E(X)$
Now take logs of both sides:
$E[\ln (X)]\leq\ln[E(X)]$
Alternatively:
$\ln X = \ln X - \ln \mu+\ln\mu \qquad$ (where $\mu=E(X)$)
$\qquad= \ln(X/\mu)+\ln \mu $
$\qquad= \ln[ \frac{X-\mu}{\mu} + 1]+\ln \mu$
$\qquad \leq \frac{X-\mu}{\mu} + \ln \mu\qquad$ (since $\ln(t+1)\leq t$)
Now take expectations of both sides:
$E[\ln(X)] \leq \ln\mu$
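As a quick numerical sanity check (not part of the proof), here is a small Python sketch comparing the two sides on simulated positive data; the lognormal distribution is just an arbitrary choice of a positive random variable.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)  # any positive random variable works

log_of_mean = np.log(x.mean())      # sample version of ln[E(X)]
mean_of_log = np.log(x).mean()      # sample version of E[ln(X)]

print(log_of_mean, mean_of_log)     # log_of_mean should be the larger of the two
```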
An illustration (showing the connection to Jensen's inequality):
(Here the roles of X and Y are interchanged so that they match the plot axes; better planning would have swapped their roles above so the plot more directly matched the algebra.)
The solid coloured lines represent means on each axis.
As we can see, because the relationship "bends toward" $X$ in the middle (and "away from" $Y$), the mean of $Y$ (orange horizontal line) goes along a little further before hitting the curve, giving the small gap (marked in blue) between log(mean(y)) and mean(log(y)).
19,703 | Intuitively understand why the Poisson distribution is the limiting case of the binomial distribution | I will try a simple intuitive explanation. Recall that for a binomial random variable $X \sim \text{Bin}(n,p)$ the expectation is $n p$ and the variance is $n p (1-p)$. Now think of $X$ as recording the number of events in a very large number $n$ of trials, each with a very small probability $p$, so that $1-p$ is very close to $1$ (really $1-p \approx 1$). Then, writing $np=\lambda$, we have $n p (1-p) \approx n p \cdot 1 =\lambda$, so the mean and variance are both equal to $\lambda$. Then remember that for a Poisson distributed random variable, the mean and variance are always equal! That is at least a plausibility argument for the Poisson approximation, but not a proof.
Then look at it from another viewpoint: the Poisson point process https://en.wikipedia.org/wiki/Poisson_point_process on the real line. This is the distribution of random points on the line that we get if random points occur according to the rules:
points in disjoint intervals are independent
probability of a random point in a very short interval is proportional to length of interval
probability of two or more points in a very short interval is essentially zero.
Then the distribution of the number of points in a given interval (not necessarily short) is Poisson (with parameter $\lambda$ proportional to its length). Now, if we divide this interval into very many ($n$) equally short subintervals, the probability of two or more points in a given subinterval is essentially zero, so each subinterval count will have, to a very good approximation, a Bernoulli distribution, that is, $\text{Bin}(1,p)$; the sum of all these counts will then be $\text{Bin}(n,p)$, which is therefore a good approximation of the Poisson distribution of the number of points in that (long) interval.
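If you want to see this convergence numerically, here is a small Python sketch (using scipy; $\lambda = 4$ is an arbitrary choice) comparing the $\text{Bin}(n, \lambda/n)$ pmf to the Poisson($\lambda$) pmf as $n$ grows:

```python
import numpy as np
from scipy.stats import binom, poisson

lam = 4.0
k = np.arange(0, 16)

for n in (10, 100, 1000, 10000):
    p = lam / n                       # keep n*p = lambda fixed
    max_diff = np.max(np.abs(binom.pmf(k, n, p) - poisson.pmf(k, lam)))
    print(f"n = {n:>6}: max |Bin - Poisson| pmf difference = {max_diff:.2e}")
```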
Edit from @Ytsen de Boer (OP): question number 2 is satisfactorily answered by @Łukasz Grad.
19,704 | Intuitively understand why the Poisson distribution is the limiting case of the binomial distribution | Let me provide an alternate heuristic. I'm going to show how to approximate the Poisson process as a binomial (and argue that the approximation is better for many trials with low probability). Therefore the binomial distribution must tend to the Poisson distribution.
Let's say events are happening with a constant rate in time. We want to know the distribution of how many events happened in a day, knowing that the expected number of events is $\lambda$.
Well, the expected number of events per hour is $\lambda/24$. Let's pretend that this means that the probability of an event happening in a given hour is $\lambda/24$. [It's not quite right, but it is a decent approximation if $\lambda/24 \ll 1$, i.e. basically if we can assume multiple events don't happen in the same hour.] Then we can approximate the distribution of the number of events as a binomial with $M=24$ trials, each having success probability $\lambda/24$.
We improve the approximation by switching our interval to minutes. Then it's $p=\lambda/1440$ with $M=1440$ trials. If $\lambda$ is around, say 10, then we can be pretty confident that no minute had two events.
Of course it gets better if we switch to seconds. Now we're looking at $M=86400$ trials, each with the small probability $\lambda/86400$.
No matter how big your $\lambda$ is, I can eventually choose a small enough $\Delta t$ such that it's very likely that no two events happen in the same interval. Then the binomial distribution corresponding to that $\Delta t$ will be an excellent match to the true Poisson distribution.
The only reason they aren't exactly the same is that there is a non-zero probability that two events happen in the same time interval. But given there are only around $\lambda$ events and they are distributed into some number of bins much greater than $\lambda$, it's unlikely that any two of them lie in the same bin.
Or in other words, the binomial distribution tends to the Poisson distribution as $M \to \infty$ if the success probability is $p=\lambda/M$.
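To put a rough number on how unlikely a "collision" is, here is a small Python sketch (taking $\lambda = 10$ as in the example above): it computes the probability that at least one subinterval contains two or more events, using the fact that the counts in the $M$ subintervals of a Poisson process are independent Poisson($\lambda/M$) variables.

```python
import math

lam = 10.0
for M in (24, 1440, 86400):          # hours, minutes, seconds in a day
    p0_or_1 = math.exp(-lam / M) * (1 + lam / M)   # P(a given subinterval has 0 or 1 events)
    p_collision = 1 - p0_or_1 ** M                 # P(some subinterval has >= 2 events)
    print(f"M = {M:>6}: P(at least one subinterval with 2+ events) = {p_collision:.5f}")
```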
19,705 | Intuitively understand why the Poisson distribution is the limiting case of the binomial distribution | The problem is that your characterization of the Poisson as a limiting case of the binomial distribution is not quite correct as stated.
The Poisson is a limiting case of the binomial when: $$M \to \infty \quad \color{red}{\text{and} \quad Mp \to \lambda.}$$ The second part is important. If $p$ remains fixed, the first condition implies that the rate will also increase without bound.
What the Poisson distribution assumes is that events are rare. What we mean by "rare" is not that the rate of events is small--indeed, a Poisson process may have a very high intensity $\lambda$--but rather, that the probability of an event occurring at any instant in time $[t, t + dt)$ is vanishingly small. This is in contrast to a binomial model where the probability $p$ of an event (e.g. "success") is fixed for any given trial.
To illustrate, suppose we model a series of $M$ independent Bernoulli trials each with probability of success $p$, and we look at what happens to the distribution of the number of successes $X$ as $M \to \infty$. For any $N$ as large as we please, and no matter how small $p$ is, the expected number of successes $\operatorname{E}[X] = Mp > N$ for $M > N/p$. Put another way, no matter how unlikely the probability of success, eventually you can achieve an average number of successes as large as you please if you perform sufficiently many trials. So, $M \to \infty$ (or, just saying "$M$ is large") is not enough to justify a Poisson model for $X$.
It is not difficult to algebraically establish $$\Pr[X = x] = e^{-\lambda} \frac{\lambda^x}{x!}, \quad x = 0, 1, 2, \ldots$$ as a limiting case of $$\Pr[X = x] = \binom{M}{x} p^x (1-p)^{M-x}, \quad x = 0, 1, 2, \ldots, M$$ by setting $p = \lambda/M$ and letting $M \to \infty$. Other answers here have addressed the intuition behind this relationship and provided computational guidance as well. But it is important that $p = \lambda/M$. You can't ignore this.
19,706 | Intuitively understand why the Poisson distribution is the limiting case of the binomial distribution | Question 1
Recall the definition of the binomial distribution:
a frequency distribution of the possible number of successful outcomes in a given number of trials in each of which there is the same probability of success.
Compare this to the definition of the Poisson distribution:
a discrete frequency distribution which gives the probability of a number of independent events occurring in a fixed time.
The substantial difference between the two is that the binomial is over $n$ trials, while the Poisson is over a time period $t$. How can the limit occur intuitively?
Let's say that you have to keep running Bernoulli trials for all eternity. Moreover, you run $n = 30$ per minute and count each success per minute. So for all eternity you are running a $Bin(p,30)$ process every minute. Over 24 hours, you have a $Bin(p,43200)$.
As you get tired, you are asked "how many successes occurred between 18:00 and 19:00?". Your answer might be $30*60*p$, i.e. you provide the average successes in an hour. That sounds a lot like the Poisson parameter $\lambda$ to me.
19,707 | Intuitively understand why the Poisson distribution is the limiting case of the binomial distribution | Question 2)
$$\frac{\frac{M!}{N!(M-N)!}}{\frac{M^N}{N!}} = \frac{M(M-1)\dots(M - N + 1)}{M^N} = 1(1 - \frac{1}{M})\dots(1 - \frac{N - 1}{M})$$
So taking the limit for fixed $N$
$$\lim_{M \to \infty} \frac{\frac{M!}{N!(M-N)!}}{\frac{M^N}{N!}} = \lim_{M \to \infty} 1(1 - \frac{1}{M})\dots(1 - \frac{N - 1}{M}) = 1$$
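A quick numerical illustration of this limit (with $N = 3$ chosen arbitrarily), as a Python sketch:

```python
from math import comb, factorial

N = 3
for M in (10, 100, 1000, 10000):
    ratio = comb(M, N) / (M**N / factorial(N))
    print(f"M = {M:>6}: ratio = {ratio:.6f}")   # tends to 1 as M grows
```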
19,708 | Intuitively understand why the Poisson distribution is the limiting case of the binomial distribution | I can only attempt a partial answer, and it is about the intuition for Question 2, not a rigorous proof.
The binomial coefficient gives you the number of samples of size $N$, from $M$, without replacement and without order.
Here, though, $M$ becomes so large that you may approximate the scenario as sampling with replacement, in which case you get $M^N$ ordered samples. If you don't care about the order of the $N$ objects chosen, this reduces to $M^N/N!$, because those $N$ objects can be ordered in $N!$ ways.
19,709 | Intuitively understand why the Poisson distribution is the limiting case of the binomial distribution | I think this example (a Galton board) is the best one for intuitively explaining how the binomial distribution converges to the normal with a large number of balls. Here, each ball has an equal probability of falling on either side of the peg in each layer, and all the balls have to pass the same number of pegs. It can easily be seen that as the number of balls gets very large, the distribution of balls across the different sections will look like a normal distribution.
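As a rough sketch of that idea, the following Python snippet simulates many balls bouncing left or right with probability 1/2 at each layer of pegs and prints a text histogram of the bins they land in (the numbers of balls and layers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_balls, n_layers = 100_000, 12

# at each layer the ball bounces right (1) or left (0) with probability 1/2;
# its final bin is the total number of rightward bounces
steps = rng.integers(0, 2, size=(n_balls, n_layers))
bins = steps.sum(axis=1)

counts = np.bincount(bins, minlength=n_layers + 1)
for k, c in enumerate(counts):
    print(f"bin {k:2d}: {'#' * (60 * c // counts.max())}")
```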
My answer to your question 2 is the same as the answer given by Lukasz.
19,710 | Normalizations: dividing by mean | The difference between subtracting the mean and dividing by the mean is the difference between subtraction and division; presumably you are not really asking about the mathematics. There is no mystery here, as it's no more than a statistical analogue of
Bill is 5 cm taller than Betty (subtraction)
Bill is twice the weight of his son Bob (division)
with the difference that the mean is used as a reference level, rather than another value. We should emphasise that
(Bill $-$ Betty) or (value $-$ mean) preserves units of measurement
while
(Bill / Bob) or (value / mean) is independent of units of measurement.
and that subtraction of the mean is always possible, while division by the mean usually only makes sense if the mean is guaranteed to be positive (or, more widely, that no two values have different signs and the mean cannot be zero).
Taking it further, (value $-$ mean) / SD is scaling by the standard deviation, and so again produces a measure independent of units of measurement, and also of the variability of the variable. It's always possible so long as the SD is positive, a restriction that does not bite. (If the SD were zero then every value is the same, and detailed summary is easy without any of these devices.) This kind of rescaling is often called standardization, although that term too is overloaded.
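For concreteness, here is a small Python sketch (with made-up numbers) showing the three adjustments side by side: subtracting the mean, dividing by the mean, and standardizing.

```python
import numpy as np

x = np.array([12.0, 15.0, 9.0, 20.0, 14.0])        # made-up measurements

centred      = x - x.mean()                        # value - mean: keeps the original units
relative     = x / x.mean()                        # value / mean: unit-free, needs a positive mean
standardised = (x - x.mean()) / x.std(ddof=1)      # (value - mean) / SD: unit-free and scale-free

print(centred)
print(relative)
print(standardised)
```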
Note that subtraction of the mean (without or with division by SD) is just a change of units, so distribution plots and time series plots (which you ask about) look just the same before and after; the numeric axis labels will differ, but the shape is preserved.
The choice is usually substantive rather than strictly statistical, so it is a question of which kind of adjustment is a helpful simplification, or indeed whether that is so.
I'll add that your question points up, in reverse, a point often made on this forum: asking about normalization is futile unless a precise definition is offered; in fact, there are even more meanings in use than those you mentioned.
The OP's context of space-time data is immaterial here; the principles apply regardless of whether you have temporal, spatial or spatial-temporal data.
19,711 | Normalizations: dividing by mean | If you're considering datapoints from multiple years, subtracting or dividing by year-specific means would change plots combining multiple years. Dividing by the mean might be interesting in many applications, one of which I dealt with today. For instance, if you're interested in observing how a sociodemographic group is distributed/concentrated in different equally sized neighbourhoods of a city, you might simply look at the percentage of people in each neighbourhood that belongs to that group. However, if you're interested in observing how the concentration patterns evolve in time, you might want to net off the effect of changes in the total number of members of the group living in the city (for instance because you're only interested in location choices within the city). If that is the case, for each t it would be helpful to divide each neighbourhood-level percentage by the share of the group in the city population at time t (which, if neighbourhoods are equally sized and cover the whole city, is equal to the mean percentage). And, of course, it could make the difference!
19,712 | Normalizations: dividing by mean | I used the dividing-by-the-mean method in my research because it's actually helpful in assessing inequality across regions.
I'm doing research which is basically about assessing how certain burden parameters are distributed across different territories in a region. This normalization method lets me know how many times the average burden a certain region holds. A value of 2 would mean that a region is holding 2 times the average burden (overburdened); a value of 0.5 would mean that a region is holding half of the average burden (underburdened).
The preferred situation is of course for every region to have a value close to 1, which would indicate low inequality of burden, because the value for every region is then close to the average value.
I might be really late to this, but hopefully my answer can be of some help.
19,713 | Why would I want to bootstrap when computing an independent sample t-test? (how to justify, interpret, and report a bootstrapped t-test) | There are several misunderstandings in your post (some of which are common, and you may have been told the wrong thing because the person telling you was just passing on the misinformation).
First is that the bootstrap is not the savior of the small sample size. The bootstrap actually fares quite poorly for small sample sizes, even when the population is normal. This question, answer, and discussion should shed some light on that. Also the article here gives more details and background.
Both the t-test and the bootstrap are based on sampling distributions, that is, on the distribution of the test statistic.
The exact t-test is based on theory and the condition that the population/process generating the data is normal. The t-test happens to be fairly robust to the normality assumption (as far as the size of the test goes, power and precision can be another matter) so for some cases the combination of "Normal enough" and "Large sample size" means that the sampling distribution is "close enough" to normal that the t-test is a reasonable choice.
The bootstrap, instead of assuming a normal population, uses the sample CDF as an estimate of the population and computes/estimates (usually through simulation) the true sampling distribution (which may be normalish, but does not need to be). If the sample does a reasonable job of representing the population then the bootstrap works well. But for small sample sizes it is very easy for the sample to do a poor job of representing the population, and the bootstrap methods do a lousy job in those cases (see the simulation and paper referenced above).
The advantage of the t-test is that if all the assumptions hold (or are close) then it works well (I think it is actually the uniformly most powerful test). The disadvantage is that it does not work well if the assumptions are not true (and not close to being true), and there are some cases where the assumptions make a bigger difference than in others. And the t-test theory does not apply for some parameters/statistics of interest, e.g. trimmed means, standard deviations, quantiles, etc.
The advantage of the bootstrap is that it can estimate the sampling distribution without many of the assumptions needed by parametric methods. It works for statistics other than the mean and in cases where other assumptions do not hold (e.g. 2 samples, unequal variances). The disadvantage of the bootstrap is that it is very dependent on the sample representing the population, because it does not have the advantages of other assumptions. The bootstrap does not give you normality, it gives you the sampling distribution (which sometimes looks normal, but still works when it is not) without needing the assumptions about the population.
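To illustrate what estimating the sampling distribution from the sample looks like in practice, here is a minimal Python sketch of a percentile bootstrap for the difference in means of two independent samples; the data are simulated placeholders, and a real analysis would use your own samples (and possibly a better interval construction than the simple percentile method).

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=40)        # placeholder sample 1
y = rng.exponential(scale=1.3, size=35)        # placeholder sample 2

n_boot = 10_000
diffs = np.empty(n_boot)
for b in range(n_boot):
    xb = rng.choice(x, size=x.size, replace=True)   # resample each group separately
    yb = rng.choice(y, size=y.size, replace=True)
    diffs[b] = xb.mean() - yb.mean()

ci = np.percentile(diffs, [2.5, 97.5])              # simple percentile interval
print(f"observed difference: {x.mean() - y.mean():.3f}, 95% bootstrap CI: {ci}")
```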
For t-tests where it is reasonable to assume that the population is normal (or at least normal enough) then the t-test will be best (of the 2).
If you do not have normality and do have small samples, then neither the t-test nor the bootstrap should be trusted. For the 2 sample case a permutation test will work well if you are willing to assume equal distributions (including equal variances) under the null hypothesis. This is a very reasonable assumption when doing a randomized experiment, but may not be when comparing 2 separate populations (but then if you believe that 2 populations may have different spreads/shapes then maybe a test of means is not the most interesting question or the best place to start).
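Here is a minimal Python sketch of such a two-sample permutation test (again with placeholder data), where the null hypothesis of equal distributions is what justifies shuffling the group labels:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=30)      # placeholder group 1
y = rng.normal(0.5, 1.0, size=30)      # placeholder group 2

observed = x.mean() - y.mean()
pooled = np.concatenate([x, y])

n_perm = 10_000
count = 0
for _ in range(n_perm):
    perm = rng.permutation(pooled)          # reassign group labels at random
    diff = perm[:x.size].mean() - perm[x.size:].mean()
    if abs(diff) >= abs(observed):
        count += 1

print(f"two-sided permutation p-value ~ {(count + 1) / (n_perm + 1):.4f}")
```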
With huge sample sizes the large sample theory will benefit both t-tests and bootstrapping and you will see little or no difference when comparing means.
With moderate sample sizes the bootstrap can perform well and may be preferred when you are unwilling to make the assumptions needed for the t-test procedures.
The important thing is to understand the assumptions and conditions that are required for the different procedures you are considering, to consider how those conditions (and deviations from them) will affect your analysis, and to think about how well the population/process that produced your data fits those conditions; simulation can help you understand how the deviations affect the different methods. Remember that all statistical procedures have conditions and assumptions (with the possible exception of SnowsCorrectlySizedButOtherwiseUselessTestOfAnything, but if you use that test then people will make assumptions about you).
19,714 | Covariance of transformed random variables | One could take the approach of Taylor expansion:
http://en.wikipedia.org/wiki/Taylor_expansions_for_the_moments_of_functions_of_random_variables
Edit:
Take $U=\log(X)$, $V=\log(Y)$.
Use a multivariate Taylor expansion to compute an approximation to $\rm{E}(UV)$ (in a similar fashion to the example at the end of "First Moment" in the link, which does the simpler case of $\rm{E}(X\cdot 1/Y)$), and use univariate expansions to compute approximations to $\rm{E}(U)$ and $\rm{E}(V)$ (as given in the first part of the same section) to similar accuracy. From those things, compute the (approximated) covariance.
Expanding out to similar degree of approximation as the example in the link, I think you end up with terms in the mean and variance of each (untransformed) variable, and their covariance.
Edit 2:
But here's a little trick that may save some effort:
Note that $\rm{E}(XY) = \rm{Cov}(X,Y) + \rm{E}(X)\rm{E}(Y)$ and $X=\exp(U)$ and $Y=\exp(V)$.
Given
$$
\operatorname{E}\left[f(X)\right]\approx f(\mu_X) +\frac{f''(\mu_X)}{2}\sigma_X^2
$$
we have
$$
\operatorname{E}(\exp(U)) \approx \exp(\mu_U) + \frac{\exp(\mu_U)}{2}\sigma_U^2 \approx \exp(\mu_U+\frac{1}{2}\sigma_U^2)
$$
Edit: That last step follows from Taylor approximation $\exp(b) \approx 1 + b$, which is good for small $b$ (taking $b =\frac{1}{2}\sigma_U^2$).
(that approximation is exact for $U$, $V$ normal: $\operatorname{E}(\exp(U))=\exp(\mu_U+\frac{1}{2}\sigma_U^2)$)
Let $W = U + V$
$$
\operatorname{E}(XY) = \operatorname{E}(\exp(U).\exp(V)) = \operatorname{E}(\exp(W))
$$
$$
\approx \exp(\mu_{W}) + \frac{\exp(\mu_{W})}{2}\sigma_W^2 \approx \exp(\mu_W+\frac{1}{2}\sigma_W^2)
$$
and given $\operatorname{Var}(W) = \operatorname{Var}(U) + \operatorname{Var}(V) + 2 \operatorname{Cov}(U,V)$, then
(Edit:)
$$
1+\frac{\operatorname{Cov}(X,Y)}{\operatorname{E}(X)\operatorname{E}(Y)} = \frac{\operatorname{E}(XY)}{\operatorname{E}(X)\operatorname{E}(Y)}
$$
$$
\approx \frac{\exp(\mu_W+\frac{1}{2}\sigma_W^2)}{\exp(\mu_U+\frac{1}{2}\sigma_U^2).\exp(\mu_V+\frac{1}{2}\sigma_V^2)}
$$
$$
\approx \frac{\exp(\mu_U+\mu_V+\frac{1}{2}(\sigma_U^2+\sigma_V^2+2 \operatorname{Cov}(U,V)))}{\exp(\mu_U+\frac{1}{2}\sigma_U^2).\exp(\mu_V+\frac{1}{2}\sigma_V^2)}
$$
$$
\approx \exp[\operatorname{Cov}(U,V)]
$$
Hence $\operatorname{Cov}(U,V)\approx \log(1+\frac{\operatorname{Cov}(X,Y)}{\operatorname{E}(X)\operatorname{E}(Y)})$. This should be exact for $U,V$ bivariate gaussian.
If you used the first approximation rather than the second, you would get a different approximation here.
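As a quick numeric check of this relationship (exact in the bivariate Gaussian case), here is a Python sketch with arbitrary parameters: generate correlated Gaussian $(U,V)$, set $X=\exp(U)$, $Y=\exp(V)$, and compare $\operatorname{Cov}(U,V)$ with $\log(1+\operatorname{Cov}(X,Y)/(\operatorname{E}(X)\operatorname{E}(Y)))$.

```python
import numpy as np

rng = np.random.default_rng(0)
cov_uv = 0.3                                         # true Cov(U, V)
mean = [0.1, 0.2]
cov = [[0.5, cov_uv], [cov_uv, 0.8]]

u, v = rng.multivariate_normal(mean, cov, size=1_000_000).T
x, y = np.exp(u), np.exp(v)                          # X = exp(U), Y = exp(V)

approx = np.log(1 + np.cov(x, y)[0, 1] / (x.mean() * y.mean()))
print(f"Cov(U,V) = {np.cov(u, v)[0, 1]:.4f}, log-formula estimate = {approx:.4f}")
```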
19,715 | Covariance of transformed random variables | Without any additional assumptions on $X$ and $Y$, it is not possible to deduce the covariance of the log knowing the initial covariance.
On the other hand, if you were able to compute $\mathrm{Cov}(X,Y)$ from $X$ and $Y$, what prevents you from calculating $\mathrm{Cov}(\log(X), \log(Y))$ from $\log(X)$ and $\log(Y)$ directly?
19,716 | How to perform Neural Network modelling effectively? | The advice I would give is as follows:
Exhaust the possibilities of linear models (e.g. logistic regression) before going on to neural nets, especially if you have many features and not too many observations. For many problems a Neural Net does not out-perform simple linear classifiers, and the only way to find out if your problem is in this category is to try it and see.
Investigate kernel methods (e.g. Support Vector Machines (SVM), kernel logistic regression) and Gaussian process models first. In both cases over-fitting is effectively controlled by tuning a small number of hyper-parameters. For kernel methods this is often performed by cross-validation, for Gaussian process models this is performed by maximising the marginal likelihood (also known as the Bayesian "evidence" for the model). I have found it is much easier to get a reasonable model using these methods than with neural networks, as the means of avoiding over-fitting is so much more straightforward.
If you really want to use a neural network, start with a (regularised) radial basis function network, rather than a feedforward Multilayer Perceptron (MLP) type network.
If you do use an MLP, then use regularisation. If you do, it will be less sensitive to choices about architecture, such as optimising the number of hidden units. Instead, all you have to do is choose a good value for the regularisation parameter. MacKay's Bayesian "evidence framework" provides a good method for setting the regularisation parameter. If you use regularisation, then the number of observations and number of variables becomes much less of an issue.
To detect over-fitting, simply perform cross-validation to test generalisation performance.
As for classes having equal frequencies, the thing to remember is that if you train a model with a balanced training set, but the classes are not balanced in the operational data, then the model is very likely to under-predict the minority class. If you use a probabilistic classifier such as logistic regression or a neural network, you can always correct the estimated probabilities to account for that after training. If your dataset is very imbalanced, I would recommend differential weighting of patterns from the positive and negative classes, with the weighting factors selected by cross-validation.
However, when the classes are very unbalanced, it is normally the case that false-negative and false-positive errors have different costs (e.g. in medical screening tests a false-negative is much worse than a false-positive). So often all you need to do is include the misclassification costs into the error function used to train the network.
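One simple way to fold such costs into training, sketched here with scikit-learn's class_weight option on a logistic regression (the data and the 10:1 weighting are placeholders; many classifiers accept per-class or per-sample weights in a similar way):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                                     # placeholder features
y = (X[:, 0] + rng.normal(scale=2.0, size=1000) > 3).astype(int)   # rare positive class

# treat a false negative as roughly 10x as costly as a false positive
model = LogisticRegression(class_weight={0: 1.0, 1: 10.0}, max_iter=1000)
model.fit(X, y)
print(model.predict_proba(X[:5]))
```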
If you are a MATLAB user (like me), I can strongly recommend the NETLAB software (Ian Nabney and Chris Bishop) or the software that goes with the book Gaussian Processes for Machine Learning by Rasmussen and Williams. I can also strongly recommend the book "Neural Networks for Pattern Recognition" by Chris Bishop for anyone starting out in neural nets. It is a brilliant book, and covers the material with great clarity and the minimum level of maths required to really understand what you are doing, and most of it is implemented in the NETLAB software (which may also run under Octave).
HTH
P.S. The best way of modelling with a neural net is probably to use a Bayesian approach based on Hybrid Monte Carlo (HMC), as developed by Radford Neal. In general, problems in modelling start when you try to optimise some parameters and end up over-fitting. The best solution is to never optimise anything and marginalise (integrate) over parameters instead. Sadly this integration can't be performed analytically, so you need to use sampling based approaches instead. However, this is (a) computationally expensive and (b) a bit of a "black art" that requires deep understanding and experience. | How to perform Neural Network modelling effectively? | The advice I would give is as follows:
Exhaust the possibilities of linear models (e.g. logistic regression) before going on to neural nets, especially if you have many features and not too many obse | How to perform Neural Network modelling effectively?
The advice I would give is as follows:
Exhaust the possibilities of linear models (e.g. logistic regression) before going on to neural nets, especially if you have many features and not too many observations. For many problems a Neural Net does not out-perform simple linear classifiers, and the only way to find out if your problem is in this category is to try it and see.
Investigate kernel methods (e.g. Support Vector Machines (SVM), kernel logistic regression), Gaussian process models first. In both cases over-fitting is effectively controlled by tuning a small number of hyper-parameters. For kernel methods this is often performed by cross-validation, for Gaussian process models this is performed by maximising the marginal likelihood (also known as the Bayesian "evidence" for the model). I have found it is much easier to get a reasonable model using these methods than with neural networks, as the means of avoiding over-fitting is so much more straightforward.
If you really want to use a neural network, start with a (regularised) radial basis function network, rather than a feedforward Multilayer Perceptron (MLP) type network.
If you do use an MLP, then use regularisation. If you do, it will be less sensitive to choices about architecture, such as optimising the number of hidden units. Instead, all you have to do is choose a good value for the regularisation parameter. MacKay's Bayesian "evidence framework" provides a good method for setting the regularisation parameter. If you use regularisation, then the number of observations and number of variables becomes much less of an issue.
To detect over-fitting, simply perform cross-validation to test generalisation performance.
As for classes having equal frequencies, the thing to remember is that if you train a model with a balanced training set, but the classes are not balanced in the operational data, then the model is very likely to under-predict the minority class. If you use a probabilistic classifier such as logistic regression or a neural network, you can always correct the estimated probabilities to account for that after training. If your dataset is very imbalanced, I would recommend differential weighting of patterns from the positive and negative classes, with the weighting factors selected by cross-validation.
However, when the classes are very unbalanced, it is normally the case that false-negative and false-positive errors have different costs (e.g. in medical screening tests a false-negative is much worse than a false-positive). So often all you need to do is include the misclassification costs into the error function used to train the network.
If you are a MATLAB user (like me) I can strongly recommend the NETLAB software (Ian Nabney and Chris Bishop) or the software that goes with the book Gaussian Processes for Machine Learning by Rasmussen and Williams. I can also strongly recommend the book "Neural networks for pattern recognition" by Chris Bishop for anyone starting out in neural nets. It is a brilliant book, and covers the material with great clarity and the minimum level of maths required to really understand what you are doing, and most of it is implemented in the NETLAB software (which may also run under Octave).
HTH
P.S. The best way of modelling with a neural net is probably to use a Bayesian approach based on Hybrid Monte Carlo (HMC), as developed by Radford Neal. In general, problems in modelling start when you try to optimise some parameters and end up over-fitting. The best solution is to never optimise anything and marginalise (integrate) over parameters instead. Sadly this integration can't be performed analytically, so you need to use sampling based approaches instead. However, this is (a) computationally expensive and (b) a bit of a "black art" that requires deep understanding and experience. | How to perform Neural Network modelling effectively?
The advice I would give is as follows:
Exhaust the possibilities of linear models (e.g. logistic regression) before going on to neural nets, especially if you have many features and not too many obse |
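The probability correction for unbalanced classes mentioned in the answer above (train on a balanced set, then adjust the predicted probabilities for the operational class frequencies) can be written in a few lines. This is a minimal sketch, not the author's code; the function name, the prior values and the binary-class setting are assumptions made only for illustration.

```python
import numpy as np

def correct_for_priors(p_balanced, train_prior=0.5, true_prior=0.1):
    # p_balanced : predicted P(class = 1) from a model trained under train_prior.
    # train_prior: P(class = 1) in the training data (0.5 if balanced).
    # true_prior : P(class = 1) expected in the operational data.
    # Re-weight by the ratio of operational to training priors, then renormalise
    # (Bayes' rule applied to the class priors).
    num = p_balanced * true_prior / train_prior
    den = num + (1.0 - p_balanced) * (1.0 - true_prior) / (1.0 - train_prior)
    return num / den

p = np.array([0.2, 0.5, 0.9])
print(correct_for_priors(p))   # probabilities shrink towards the rarer operational prior
```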
19,717 | Statistics conferences? | UseR!
List of previous and upcoming R conferences on r-project
Related Links:
2011: University of Warwick, Coventry, UK
Videos of some keynote speakers from 2010 | Statistics conferences? | UseR!
List of previous and upcoming R conferences on r-project
Related Links:
2011: University of Warwick, Coventry, UK
Videos of some keynote speakers from 2010 | Statistics conferences?
UseR!
List of previous and upcoming R conferences on r-project
Related Links:
2011: University of Warwick, Coventry, UK
Videos of some keynote speakers from 2010 | Statistics conferences?
UseR!
List of previous and upcoming R conferences on r-project
Related Links:
2011: University of Warwick, Coventry, UK
Videos of some keynote speakers from 2010 |
19,718 | Statistics conferences? | In terms of overall breadth, I would say that the ASA/IMS Joint Statistical Meetings are the most significant. Next year, the statisticians are taking their talents to South Beach...or Miami Beach is more correct. I just couldn't help to use that line from Lebron James' infamous press conference. Having said that, I prefer smaller conferences like the UseR! conferences, ICORS (robust statistics), etc. | Statistics conferences? | In terms of overall breadth, I would say that the ASA/IMS Joint Statistical Meetings are the most significant. Next year, the statisticians are taking their talents to South Beach...or Miami Beach is | Statistics conferences?
In terms of overall breadth, I would say that the ASA/IMS Joint Statistical Meetings are the most significant. Next year, the statisticians are taking their talents to South Beach...or Miami Beach is more correct. I just couldn't help to use that line from Lebron James' infamous press conference. Having said that, I prefer smaller conferences like the UseR! conferences, ICORS (robust statistics), etc. | Statistics conferences?
In terms of overall breadth, I would say that the ASA/IMS Joint Statistical Meetings are the most significant. Next year, the statisticians are taking their talents to South Beach...or Miami Beach is |
19,719 | Statistics conferences? | For biostatistics the largest US conferences are the meetings of the local sections of the International Biometrics Society (IBS):
ENAR for the Eastern region
WNAR for the Western region
Of these ENAR is by far larger. | Statistics conferences? | For biostatistics the largest US conferences are the meetings of the local sections of the International Biometrics Society (IBS):
ENAR for the Eastern region
WNAR for the Western region
Of these EN | Statistics conferences?
For biostatistics the largest US conferences are the meetings of the local sections of the International Biometrics Society (IBS):
ENAR for the Eastern region
WNAR for the Western region
Of these ENAR is by far larger. | Statistics conferences?
For biostatistics the largest US conferences are the meetings of the local sections of the International Biometrics Society (IBS):
ENAR for the Eastern region
WNAR for the Western region
Of these EN |
19,720 | Statistics conferences? | Shameless plug: R/Finance which is relevant for its intersection of domain-specifics as well as tools, and so far well received by participants of the 2009 and 2010 conferences.
Disclaimer: I am one of the organizers. | Statistics conferences? | Shameless plug: R/Finance which is relevant for its intersection of domain-specifics as well as tools, and so far well received by participants of the 2009 and 2010 conferences.
Disclaimer: I am one of | Statistics conferences?
Shameless plug: R/Finance which is relevant for its intersection of domain-specifics as well as tools, and so far well received by participants of the 2009 and 2010 conferences.
Disclaimer: I am one of the organizers. | Statistics conferences?
Shameless plug: R/Finance which is relevant for its intersection of domain-specifics as well as tools, and so far well received by participants of the 2009 and 2010 conferences.
Disclaimer: I am one of |
19,721 | Statistics conferences? | The main regular conference in Australia is the "Australian Statistics Conference", held every second year. The next one is ASC 2010, to be held in Western Australia in December. | Statistics conferences? | The main regular conference in Australia is the "Australian Statistics Conference", held every second year. The next one is ASC 2010, to be held in Western Australia in December. | Statistics conferences?
The main regular conference in Australia is the "Australian Statistics Conference", held every second year. The next one is ASC 2010, to be held in Western Australia in December. | Statistics conferences?
The main regular conference in Australia is the "Australian Statistics Conference", held every second year. The next one is ASC 2010, to be held in Western Australia in December. |
19,722 | Statistics conferences? | This is a new annual conference that should be great for people using statistics and wanting to improve their practical knowledge. And it will be in warm places in the winter!
The inaugural ASA Conference on Statistical Practice, to be held in Orlando,
Florida, February 16–18, 2012, meets the needs of statistical practitioners
engaged in design, analysis, programming, and consulting. The program for
Statistical Practice 2012 features three main tracks:
Research and Development, Engineering, and Operations
Business Analytics
Communications, Impact, and Career Development
Both the content and format of this conference have been carefully planned to
improve your effectiveness as a statistical problem solver. Invited sessions,
tutorials, and short courses will refresh your statistical training, update
your knowledge of emerging areas of practice, and sharpen your soft skills.
Poster sessions will give you the opportunity to share and discuss your work.
And if you are looking for employment, a virtual career placement service
will benefit your search.
The timing for this conference could not be better, because the opportunities
for statistical practitioners have never been greater.
For more information please visit
http://www.amstat.org/meetings/csp/2012/index.cfm | Statistics conferences? | This is a new annual conference that should be great for people using statistics and wanting to improve their practical knowledge. And it will be in warm places in the winter!
The inaugural ASA Confe | Statistics conferences?
This is a new annual conference that should be great for people using statistics and wanting to improve their practical knowledge. And it will be in warm places in the winter!
The inaugural ASA Conference on Statistical Practice, to be held in Orlando,
Florida, February 16–18, 2012, meets the needs of statistical practitioners
engaged in design, analysis, programming, and consulting. The program for
Statistical Practice 2012 features three main tracks:
Research and Development, Engineering, and Operations
Business Analytics
Communications, Impact, and Career Development
Both the content and format of this conference have been carefully planned to
improve your effectiveness as a statistical problem solver. Invited sessions,
tutorials, and short courses will refresh your statistical training, update
your knowledge of emerging areas of practice, and sharpen your soft skills.
Poster sessions will give you the opportunity to share and discuss your work.
And if you are looking for employment, a virtual career placement service
will benefit your search.
The timing for this conference could not be better, because the opportunities
for statistical practitioners have never been greater.
For more information please visit
http://www.amstat.org/meetings/csp/2012/index.cfm | Statistics conferences?
This is a new annual conference that should be great for people using statistics and wanting to improve their practical knowledge. And it will be in warm places in the winter!
The inaugural ASA Confe |
19,723 | Statistics conferences? | Though it hasn't been around for decades, I'm currently looking forward to MCMSki. One could also mention the Valencia Meetings, but those only happen every four years (and you've already missed the 2010 meeting). | Statistics conferences? | Though it hasn't been around for decades, I'm currently looking forward to MCMSki. One could also mention the Valencia Meetings, but those only happen every four years (and you've already missed the | Statistics conferences?
Though it hasn't been around for decades, I'm currently looking forward to MCMSki. One could also mention the Valencia Meetings, but those only happen every four years (and you've already missed the 2010 meeting). | Statistics conferences?
Though it hasn't been around for decades, I'm currently looking forward to MCMSki. One could also mention the Valencia Meetings, but those only happen every four years (and you've already missed the |
19,724 | Statistics conferences? | Joint Statistical Meetings (JSM) is the biggest meeting I know of. In fact it's almost shocking that after 10 years and as many answers, no one else has offered JSM as a possible "best" conference. But "best" boils down to why you're there.
JSM has 1,000s of statisticians, and many industry sponsors, and top tier keynotes from industry and academic leaders. There's too much to do, from round tables, panel discussions, seminars, workshops, job booths, etc. I never went as a graduate student, but wish I did.
https://www.amstat.org/meetings/joint-statistical-meetings | Statistics conferences? | Joint Statistical Meetings (JSM) is the biggest meeting I know of. In fact it's almost shocking that after 10 years and as many answers, no one else has offered JSM as a possible "best" conference. Bu | Statistics conferences?
Joint Statistical Meetings (JSM) is the biggest meeting I know of. In fact it's almost shocking that after 10 years and as many answers, no one else has offered JSM as a possible "best" conference. But "best" boils down to why you're there.
JSM has 1,000s of statisticians, and many industry sponsors, and top tier keynotes from industry and academic leaders. There's too much to do, from round tables, panel discussions, seminars, workshops, job booths, etc. I never went as a graduate student, but wish I did.
https://www.amstat.org/meetings/joint-statistical-meetings | Statistics conferences?
Joint Statistical Meetings (JSM) is the biggest meeting I know of. In fact it's almost shocking that after 10 years and as many answers, no one else has offered JSM as a possible "best" conference. Bu |
19,725 | Statistics conferences? | Not a "statistics" conference in the technical sense, but Predictive Analytics World is a case study conference on how companies are using predictive and other analytics in their businesses.
Predictive Analytics World | Statistics conferences? | Not a "statistics" conference in the technical sense, but Predictive Analytics World is a case study conference on how companies are using predictive and other analytics in their businesses.
Predictiv | Statistics conferences?
Not a "statistics" conference in the technical sense, but Predictive Analytics World is a case study conference on how companies are using predictive and other analytics in their businesses.
Predictive Analytics World | Statistics conferences?
Not a "statistics" conference in the technical sense, but Predictive Analytics World is a case study conference on how companies are using predictive and other analytics in their businesses.
Predictiv |
19,726 | Statistics conferences? | ACM SIGKDD 2010
KDD 2011 in San Diego | Statistics conferences? | ACM SIGKDD 2010
KDD 2011 in San Diego | Statistics conferences?
ACM SIGKDD 2010
KDD 2011 in San Diego | Statistics conferences?
ACM SIGKDD 2010
KDD 2011 in San Diego |
19,727 | Statistics conferences? | This is an upcoming regular conference that is being held in Sydney, Australia in July 2012; normally the meetings are in Europe: 8th international conference on social science methodology.
The Australian Consortium for Social and Political Research Inc has a conference based in Australia every two years; the one for this year happens to be run jointly with the conference I linked above. | Statistics conferences? | This is an upcoming regular conference that is being held in Sydney, Australia in July 2012; normally the meetings are in Europe: 8th international conference on social science methodology.
The Austr | Statistics conferences?
This is an upcoming regular conference that is being held in Sydney, Australia in July 2012; normally the meetings are in Europe: 8th international conference on social science methodology.
The Australian Consortium for Social and Political Research Inc has a conference based in Australia every two years; the one for this year happens to be run jointly with the conference I linked above. | Statistics conferences?
This is an upcoming regular conference that is being held in Sydney, Australia in July 2012; normally the meetings are in Europe: 8th international conference on social science methodology.
The Austr
19,728 | Statistics conferences? | For statistics in application to educational and psychological measurement, two of the big name North American conferences are the annual meetings for:
the National Council on Measurement in Education (hosted in
Vancouver, British Columbia, Canada); and
the Psychometric
Society (Lincoln, Nebraska) | Statistics conferences? | For statistics in application to educational and psychological measurement, two of the big name North American conferences are the annual meetings for:
the National Council on Measurement in Educatio | Statistics conferences?
For statistics in application to educational and psychological measurement, two of the big name North American conferences are the annual meetings for:
the National Council on Measurement in Education (hosted in
Vancouver, British Columbia, Canada); and
the Psychometric
Society (Lincoln, Nebraska) | Statistics conferences?
For statistics in application to educational and psychological measurement, two of the big name North American conferences are the annual meetings for:
the National Council on Measurement in Educatio
19,729 | Statistics conferences? | These have a reasonably comprehensive list of upcoming statistics events.
Statistics Conferences Worldwide
IMS meetings calendar
Conferences and Meetings on Probability and Statistics | Statistics conferences? | These have a reasonably comprehensive list of upcoming statistics events.
Statistics Conferences Worldwide
IMS meetings calendar
Conferences and Meetings on Probability and Statistics | Statistics conferences?
These have a reasonably comprehensive list of upcoming statistics events.
Statistics Conferences Worldwide
IMS meetings calendar
Conferences and Meetings on Probability and Statistics | Statistics conferences?
These have a reasonably comprehensive list of upcoming statistics events.
Statistics Conferences Worldwide
IMS meetings calendar
Conferences and Meetings on Probability and Statistics
19,730 | What is joint estimation? | Joint estimation is, simply, jointly estimating two (or more) things at the same time. It can be as simple as estimating the mean and standard deviation from a sample.
In a lot of the literature, the term is invoked because a special estimating procedure has to be used. This is usually the case when one quantity depends on the other and vice versa so that an analytic solution to the problem is intractable. How exactly joint estimation is done depends entirely on the problem.
One method that pops up often for "joint modeling" or joint estimation is the EM-algorithm. EM stands for expectation-maximization. The E-step fills in the missing data that otherwise depend on component A, and the M-step finds optimal estimates for component B. By iterating the E and M steps, you can find a maximum likelihood estimate of A and B, thus jointly estimating these things. | What is joint estimation? | Joint estimation is, simply, jointly estimating two (or more) things at the same time. It can be as simple as estimating the mean and standard deviation from a sample.
In a lot of the literature, the | What is joint estimation?
Joint estimation is, simply, jointly estimating two (or more) things at the same time. It can be as simple as estimating the mean and standard deviation from a sample.
In a lot of the literature, the term is invoked because a special estimating procedure has to be used. This is usually the case when one quantity depends on the other and vice versa so that an analytic solution to the problem is intractable. How exactly joint estimation is done depends entirely on the problem.
One method that pops up often for "joint modeling" or joint estimation is the EM-algorithm. EM stands for expectation-maximization. The E-step fills in the missing data that otherwise depend on component A, and the M-step finds optimal estimates for component B. By iterating the E and M steps, you can find a maximum likelihood estimate of A and B, thus jointly estimating these things. | What is joint estimation?
Joint estimation is, simply, jointly estimating two (or more) things at the same time. It can be as simple as estimating the mean and standard deviation from a sample.
In a lot of the literature, the |
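As a concrete illustration of the EM idea sketched above (this example is mine, not the answerer's; the data and starting values are invented): jointly estimating the means, standard deviations and mixing proportions of a two-component Gaussian mixture, where the E-step fills in the unknown component memberships and the M-step updates all parameters given them.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data from two Gaussian components (means unknown to the algorithm).
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

# Initial guesses for the jointly estimated quantities.
mu = np.array([-1.0, 1.0])      # component means
sigma = np.array([1.0, 1.0])    # component standard deviations
pi = np.array([0.5, 0.5])       # mixing proportions

def normal_pdf(v, m, s):
    return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

for _ in range(100):
    # E-step: "fill in" the missing component labels with responsibilities.
    dens = pi * normal_pdf(x[:, None], mu, sigma)        # shape (n, 2)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update all parameters given the responsibilities.
    nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    pi = nk / len(x)

print(mu, sigma, pi)   # should land near (-2, 3), (1, 1), (0.3, 0.7)
```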
19,731 | What is joint estimation? | In a statistical context, the term "joint estimation" could conceivably mean one of two things:
The simultaneous estimation of two or more scalar parameters (or equivalently, the estimation of a vector parameter with at least two elements); or
The estimation of a single parameter pertaining to a joint (e.g., in the study of carpentry, plumbing systems, or marijuana smoking).
Of those two options, the second one is a joke, so almost certainly, joint estimation refers to simultaneously estimating two scalar parameters at once. | What is joint estimation? | In a statistical context, the term "joint estimation" could conceivably mean one of two things:
The simultaneous estimation of two or more scalar parameters (or equivalently, the estimation of a vec | What is joint estimation?
In a statistical context, the term "joint estimation" could conceivably mean one of two things:
The simultaneous estimation of two or more scalar parameters (or equivalently, the estimation of a vector parameter with at least two elements); or
The estimation of a single parameter pertaining to a joint (e.g., in the study of carpentry, plumbing systems, or marijuana smoking).
Of those two options, the second one is a joke, so almost certainly, joint estimation refers to simultaneously estimating two scalar parameters at once. | What is joint estimation?
In a statistical context, the term "joint estimation" could conceivably mean one of two things:
The simultaneous estimation of two or more scalar parameters (or equivalently, the estimation of a vec |
19,732 | What is joint estimation? | Joint estimation is using data to estimate two or more parameters at the same time. Separate estimation evaluates each parameter one at a time.
Estimation is the result of some form of optimization process. Because of this, there do not exist unique estimation solutions in statistics. If you change your goal, then you change what is optimal. When you first learn things such as regression, no one tells you why you are doing what you are doing. The goal of the instructor is to give you a degree of basic functionality using methods that work in a wide range of circumstances. At the beginning, you are not learning about regression. Instead, you are learning one or two regression methods that are widely applicable in a wide range of circumstances.
The fact you are looking for solutions that solve a hidden goal makes it a bit difficult to understand.
In the context of regression, imagine the following algebraic expression is true $$z=\beta_xx+\beta_yy+\alpha$$. A truism in statistics is the more information that you have, the better off you are. Let us assume that you need to determine what values for $z$ will happen when you see $(x,y)$. The problem is that you do not know the true values for $\{\beta_x,\beta_y,\alpha\}$. You have a large, complete data set of $\{x,y,z\}$.
In separate estimation, you would estimate one parameter at a time. In joint estimation, you would estimate all of them at once.
As a rule of thumb, joint estimation is more accurate than a separate estimate with a large complete data set. There is one general exception to that. Imagine you have a large set of $x$ and $z$ but a small set of $y$. Imagine most of your $y$ values are missing.
In many estimation routines, you would delete the missing $x$s and $z$s and reduce down the set you are working from until all sets are complete. If you have deleted enough data, it can be more accurate to use the large number of $x$s and $z$s separately to estimate $z=\beta_xx+\alpha$ and $z=\beta_yy+\alpha$ than together.
Now as to how it is done. All estimation, excluding a few exceptional cases, uses calculus to find an estimator that minimizes some form of loss or some type of risk. The concern is that you will be unlucky in choosing your sample. Unfortunately, there is an infinite number of loss functions. There is also an infinite number of risk functions.
I found several videos for you because it is a giant topic so that you can look at it in a more general form. They are from Mathematical Monk.
https://www.youtube.com/watch?v=6GhSiM0frIk
https://www.youtube.com/watch?v=5SPm4TmYTX0
https://www.youtube.com/watch?v=b1GxZdFN6cY
and
https://www.youtube.com/watch?v=WdnP1gmb8Hw. | What is joint estimation? | Joint estimation is using data to estimate two or more parameters at the same time. Separate estimation evaluates each parameter one at a time.
Estimation is the result of some form of optimization p | What is joint estimation?
Joint estimation is using data to estimate two or more parameters at the same time. Separate estimation evaluates each parameter one at a time.
Estimation is the result of some form of optimization process. Because of this, there do not exist unique estimation solutions in statistics. If you change your goal, then you change what is optimal. When you first learn things such as regression, no one tells you why you are doing what you are doing. The goal of the instructor is to give you a degree of basic functionality using methods that work in a wide range of circumstances. At the beginning, you are not learning about regression. Instead, you are learning one or two regression methods that are widely applicable in a wide range of circumstances.
The fact you are looking for solutions that solve a hidden goal makes it a bit difficult to understand.
In the context of regression, imagine the following algebraic expression is true $$z=\beta_xx+\beta_yy+\alpha$$. A truism in statistics is the more information that you have, the better off you are. Let us assume that you need to determine what values for $z$ will happen when you see $(x,y)$. The problem is that you do not know the true values for $\{\beta_x,\beta_y,\alpha\}$. You have a large, complete data set of $\{x,y,z\}$.
In separate estimation, you would estimate one parameter at a time. In joint estimation, you would estimate all of them at once.
As a rule of thumb, joint estimation is more accurate than a separate estimate with a large complete data set. There is one general exception to that. Imagine you have a large set of $x$ and $z$ but a small set of $y$. Imagine most of your $y$ values are missing.
In many estimation routines, you would delete the missing $x$s and $z$s and reduce down the set you are working from until all sets are complete. If you have deleted enough data, it can be more accurate to use the large number of $x$s and $z$s separately to estimate $z=\beta_xx+\alpha$ and $z=\beta_yy+\alpha$ than together.
Now as to how it is done. All estimation, excluding a few exceptional cases, uses calculus to find an estimator that minimizes some form of loss or some type of risk. The concern is that you will be unlucky in choosing your sample. Unfortunately, there is an infinite number of loss functions. There is also an infinite number of risk functions.
I found several videos for you because it is a giant topic so that you can look at it in a more general form. They are from Mathematical Monk.
https://www.youtube.com/watch?v=6GhSiM0frIk
https://www.youtube.com/watch?v=5SPm4TmYTX0
https://www.youtube.com/watch?v=b1GxZdFN6cY
and
https://www.youtube.com/watch?v=WdnP1gmb8Hw. | What is joint estimation?
Joint estimation is using data to estimate two or more parameters at the same time. Separate estimation evaluates each parameter one at a time.
Estimation is the result of some form of optimization p |
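To make the joint-versus-separate distinction above concrete, here is a small numpy sketch (my own illustration, with made-up coefficients and simulated data) using the $z=\beta_xx+\beta_yy+\alpha$ example: fitting both coefficients at once recovers the truth, while two separate simple regressions are biased when $x$ and $y$ are correlated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Correlated predictors; true model is z = 2*x + 1*y + 0.5 + noise.
x = rng.normal(size=n)
y = 0.7 * x + rng.normal(scale=0.7, size=n)
z = 2.0 * x + 1.0 * y + 0.5 + rng.normal(scale=1.0, size=n)

# Joint estimation: fit beta_x, beta_y and alpha simultaneously (ordinary least squares).
X = np.column_stack([x, y, np.ones(n)])
beta_joint, *_ = np.linalg.lstsq(X, z, rcond=None)

# Separate estimation: one simple regression per predictor.
bx = np.polyfit(x, z, 1)[0]
by = np.polyfit(y, z, 1)[0]

print("joint   :", beta_joint)   # close to [2.0, 1.0, 0.5]
print("separate:", bx, by)       # biased, because x and y are correlated
```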
19,733 | Is going from continuous data to categorical always wrong? | Is there a sharp discontinuity at your thresholds?
For instance, suppose you have two patients A and B with values 3.9 and 4.1, and another two patients C and D with values 6.7 and 6.9. Is the difference in the likelihood for cancer between A and B much larger than the corresponding difference between C and D?
If yes, then discretizing makes sense.
If not, then your thresholds may make sense in understanding your data, but they are not "well determined" in a statistically meaningful sense. Don't discretize. Instead, use your test scores "as-is", and if you suspect some kind of nonlinearity, use splines.
This is very much recommended. | Is going from continuous data to categorical always wrong? | Is there a sharp discontinuity at your thresholds?
For instance, suppose you have two patients A and B with values 3.9 and 4.1, and another two patients C and D with values 6.7 and 6.9. Is the differe | Is going from continuous data to categorical always wrong?
Is there a sharp discontinuity at your thresholds?
For instance, suppose you have two patients A and B with values 3.9 and 4.1, and another two patients C and D with values 6.7 and 6.9. Is the difference in the likelihood for cancer between A and B much larger than the corresponding difference between C and D?
If yes, then discretizing makes sense.
If not, then your thresholds may make sense in understanding your data, but they are not "well determined" in a statistically meaningful sense. Don't discretize. Instead, use your test scores "as-is", and if you suspect some kind of nonlinearity, use splines.
This is very much recommended. | Is going from continuous data to categorical always wrong?
Is there a sharp discontinuity at your thresholds?
For instance, suppose you have two patients A and B with values 3.9 and 4.1, and another two patients C and D with values 6.7 and 6.9. Is the differe |
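A minimal sketch of the "keep the score continuous and use splines" advice above. This is not from the original answer; it assumes scikit-learn >= 1.0 (for SplineTransformer), and the simulated relationship between the test value and cancer risk is invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer

rng = np.random.default_rng(2)
score = rng.uniform(0, 10, size=2000)                  # the continuous test value
# Hypothetical smooth (non-linear) relationship with disease risk.
p = 1 / (1 + np.exp(-(-4 + 0.9 * score - 0.04 * score**2)))
cancer = rng.binomial(1, p)

# Keep the score continuous and let splines capture any non-linearity,
# instead of cutting it into categories at arbitrary thresholds.
model = make_pipeline(
    SplineTransformer(degree=3, n_knots=5),
    LogisticRegression(max_iter=1000),
)
model.fit(score.reshape(-1, 1), cancer)
print(model.predict_proba([[3.9], [4.1], [6.7], [6.9]])[:, 1])  # smooth change across 3.9 vs 4.1
```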
19,734 | Is going from continuous data to categorical always wrong? | I think the standard answer is it is always bad because you lose information in the process. It is hard to believe there is any case where you would gain anything from taking natural interval data and making it categorical. | Is going from continuous data to categorical always wrong? | I think the standard answer is it is always bad because you lose information in the process. It is hard to believe there is any case where you would gain anything from taking natural interval data and | Is going from continuous data to categorical always wrong?
I think the standard answer is it is always bad because you lose information in the process. It is hard to believe there is any case where you would gain anything from taking natural interval data and making it categorical. | Is going from continuous data to categorical always wrong?
I think the standard answer is it is always bad because you lose information in the process. It is hard to believe there is any case where you would gain anything from taking natural interval data and |
19,735 | Do test scores really follow a normal distribution? | Height, for instance, is often modelled as being normal. Maybe the height of men is something like 5 foot 10 with a standard deviation of 2 inches. We know negative height is unphysical, but under this model, the probability of observing a negative height is essentially zero. We use the model anyway because it is a good enough approximation.
All models are wrong. The question is "can this model still be useful", and in instances where we are modelling things like height and test scores, modelling the phenomenon as normal is useful despite it technically allowing for unphysical things. | Do test scores really follow a normal distribution? | Height, for instance, is often modelled as being normal. Maybe the height of men is something like 5 foot 10 with a standard deviation of 2 inches. We know negative height is unphysical, but under th | Do test scores really follow a normal distribution?
Height, for instance, is often modelled as being normal. Maybe the height of men is something like 5 foot 10 with a standard deviation of 2 inches. We know negative height is unphysical, but under this model, the probability of observing a negative height is essentially zero. We use the model anyway because it is a good enough approximation.
All models are wrong. The question is "can this model still be useful", and in instances where we are modelling things like height and test scores, modelling the phenomenon as normal is useful despite it technically allowing for unphysical things. | Do test scores really follow a normal distribution?
Height, for instance, is often modelled as being normal. Maybe the height of men is something like 5 foot 10 with a standard deviation of 2 inches. We know negative height is unphysical, but under th |
19,736 | Do test scores really follow a normal distribution? | Doesn't the normal distribution allow for negative values?
Correct. It also has no upper bound.
In one part of my textbook, it says that a normal distribution could be good for modeling exam scores.
In spite of the previous statements, this is nevertheless sometimes the case. If you have many components to the test, not too strongly related (e.g. so you're not essentially asking the same question a dozen times, nor having each part require a correct answer to the previous part), and not very easy or very hard (so that most marks are somewhere near the middle), then marks may often be reasonably well approximated by a normal distribution; often well enough that typical analyses should cause little concern.
We know for sure that they aren't normal, but that's not automatically a problem -- as long as the behaviour of the procedures we use is close enough to what it should be for our purposes (e.g. standard errors, confidence intervals, significance levels and power - whichever are needed - do close to what we expect them to do)
In the next part, it asks what distribution would be appropriate to model a car insurance claim. This time, it said that the appropriate distributions would be Gamma or Inverse Gaussian because they're continuous with only positive values.
Yes, but more than that -- they tend to be heavily right skew and the variability tends to increase when the mean gets larger.
Here's an example of a claim-size distribution for vehicle claims:
https://ars.els-cdn.com/content/image/1-s2.0-S0167668715303358-gr5.jpg
(Fig 5 from Garrido, Genest & Schulz (2016) "Generalized linear models for dependent frequency and severity of insurance claims", Insurance: Mathematics and Economics, Vol 70, Sept., p205-215. https://www.sciencedirect.com/science/article/pii/S0167668715303358)
This shows a typical right-skew and heavy right tail. However we must be very careful because this is a marginal distribution, and we are writing a model for the conditional distribution, which will typically be much less skew (the marginal distribution we look at if we just do a histogram of claim sizes being a mixture of these conditional distributions). Nevertheless it is typically the case that if we look at the claim size in subgroups of the predictors (perhaps categorizing continuous variables) that the distribution is still strongly right skew and quite heavy tailed on the right, suggesting that something like a gamma model* is likely to be much more suitable than a Gaussian model.
* there may be any number of other distributions which would be more suitable than a Gaussian - the inverse Gaussian is another choice - though less common; lognormal or Weibull models, while not GLMs as they stand, may be quite useful also.
[It's rarely the case that any of these distributions are near-perfect descriptions; they're inexact approximations, but in many cases sufficiently good that the analysis is useful and has close to the desired properties.]
Well, I believe that exam scores would also be continuous with only positive values, so why would we use a normal distribution there?
Because (under the conditions I mentioned before -- lots of components, not too dependent, not too hard or easy) the distribution tends to be fairly close to symmetric, unimodal and not heavy-tailed. | Do test scores really follow a normal distribution? | Doesn't the normal distribution allow for negative values?
Correct. It also has no upper bound.
In one part of my textbook, it says that a normal distribution could be good for modeling exam scores. | Do test scores really follow a normal distribution?
Doesn't the normal distribution allow for negative values?
Correct. It also has no upper bound.
In one part of my textbook, it says that a normal distribution could be good for modeling exam scores.
In spite of the previous statements, this is nevertheless sometimes the case. If you have many components to the test, not too strongly related (e.g. so you're not essentially asking the same question a dozen times, nor having each part require a correct answer to the previous part), and not very easy or very hard (so that most marks are somewhere near the middle), then marks may often be reasonably well approximated by a normal distribution; often well enough that typical analyses should cause little concern.
We know for sure that they aren't normal, but that's not automatically a problem -- as long as the behaviour of the procedures we use is close enough to what it should be for our purposes (e.g. standard errors, confidence intervals, significance levels and power - whichever are needed - do close to what we expect them to do)
In the next part, it asks what distribution would be appropriate to model a car insurance claim. This time, it said that the appropriate distributions would be Gamma or Inverse Gaussian because they're continuous with only positive values.
Yes, but more than that -- they tend to be heavily right skew and the variability tends to increase when the mean gets larger.
Here's an example of a claim-size distribution for vehicle claims:
https://ars.els-cdn.com/content/image/1-s2.0-S0167668715303358-gr5.jpg
(Fig 5 from Garrido, Genest & Schulz (2016) "Generalized linear models for dependent frequency and severity of insurance claims", Insurance: Mathematics and Economics, Vol 70, Sept., p205-215. https://www.sciencedirect.com/science/article/pii/S0167668715303358)
This shows a typical right-skew and heavy right tail. However we must be very careful because this is a marginal distribution, and we are writing a model for the conditional distribution, which will typically be much less skew (the marginal distribution we look at if we just do a histogram of claim sizes being a mixture of these conditional distributions). Nevertheless it is typically the case that if we look at the claim size in subgroups of the predictors (perhaps categorizing continuous variables) that the distribution is still strongly right skew and quite heavy tailed on the right, suggesting that something like a gamma model* is likely to be much more suitable than a Gaussian model.
* there may be any number of other distributions which would be more suitable than a Gaussian - the inverse Gaussian is another choice - though less common; lognormal or Weibull models, while not GLMs as they stand, may be quite useful also.
[It's rarely the case that any of these distributions are near-perfect descriptions; they're inexact approximations, but in many cases sufficiently good that the analysis is useful and has close to the desired properties.]
Well, I believe that exam scores would also be continuous with only positive values, so why would we use a normal distribution there?
Because (under the conditions I mentioned before -- lots of components, not too dependent, not too hard or easy) the distribution tends to be fairly close to symmetric, unimodal and not heavy-tailed. | Do test scores really follow a normal distribution?
Doesn't the normal distribution allow for negative values?
Correct. It also has no upper bound.
In one part of my textbook, it says that a normal distribution could be good for modeling exam scores. |
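To illustrate the claim-size point above, here is a small simulation (mine, not the answerer's; the gamma shape and scale are arbitrary): right-skewed positive "claims" are fit far better by a gamma distribution than by a normal, and the normal fit also places probability on negative claims.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Right-skewed synthetic "claim sizes" (parameters chosen only for illustration).
claims = rng.gamma(shape=1.5, scale=2000.0, size=5000)

# Fit a gamma and a normal distribution by maximum likelihood and compare the fits.
g_shape, g_loc, g_scale = stats.gamma.fit(claims, floc=0)
n_mu, n_sigma = stats.norm.fit(claims)

ll_gamma = stats.gamma.logpdf(claims, g_shape, loc=g_loc, scale=g_scale).sum()
ll_norm = stats.norm.logpdf(claims, n_mu, n_sigma).sum()
print(f"log-likelihood  gamma: {ll_gamma:.0f}   normal: {ll_norm:.0f}")
print("P(claim < 0) under the normal fit:", stats.norm.cdf(0, n_mu, n_sigma))
```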
19,737 | Do test scores really follow a normal distribution? | Exam scores might be better modeled by a binomial distribution. In a highly simplified case, you might have 100 true/false questions each worth 1 point, so the score would be an integer between 0 and 100. If you assume no correlation between the test-taker's correctness from problem to problem (dubious assumption though), the score is a sum of independent random variables, and the Central Limit Theorem applies. As the number of questions increases, the fraction of correct problems converges to a normal distribution.
You ask a good question about the values less than 0. You could also ask the same question about the values greater than 100%. As the number of test questions increases, the variance of the fraction correct decreases, so the peak gets pulled towards the mean. Similarly, the best fit normal distribution will have smaller variance and the weight of the pdf outside the [0, 1] interval tends towards 0, although it will always be nonzero. The space between possible values of "fraction correct" will also decrease (1/100 for 100 questions, 1/1000 for 1000 questions, etc.), so informally, the pdf begins to behave more and more like a continuous pdf. | Do test scores really follow a normal distribution? | Exam scores might be better modeled by a binomial distribution. In a highly simplified case, you might have 100 true/false questions each worth 1 point, so the score would be an integer between 0 and
Exam scores might be better modeled by a binomial distribution. In a highly simplified case, you might have 100 true/false questions each worth 1 point, so the score would be an integer between 0 and 100. If you assume no correlation between the test-taker's correctness from problem to problem (dubious assumption though), the score is a sum of independent random variables, and the Central Limit Theorem applies. As the number of questions increases, the fraction of correct problems converges to a normal distribution.
You ask a good question about the values less than 0. You could also ask the same question about the values greater than 100%. As the number of test questions increases, the variance of the fraction correct decreases, so the peak gets pulled towards the mean. Similarly, the best fit normal distribution will have smaller variance and the weight of the pdf outside the [0, 1] interval tends towards 0, although it will always be nonzero. The space between possible values of "fraction correct" will also decrease (1/100 for 100 questions, 1/1000 for 1000 questions, etc.), so informally, the pdf begins to behave more and more like a continuous pdf. | Do test scores really follow a normal distribution?
Exam scores might be better modeled by a binomial distribution. In a highly simplified case, you might have 100 true/false questions each worth 1 point, so the score would be an integer between 0 and |
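A quick simulation of the binomial model described above (my own sketch; the number of questions and the success probability are made up): the fraction correct is closely matched by the CLT-based normal approximation, and the mass that approximation places outside [0, 1] is negligible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_questions, p_correct = 100, 0.7
scores = rng.binomial(n_questions, p_correct, size=100_000) / n_questions

# Normal approximation for the fraction correct suggested by the CLT.
mu = p_correct
sigma = np.sqrt(p_correct * (1 - p_correct) / n_questions)
print(f"simulated mean/sd: {scores.mean():.3f} / {scores.std():.4f}")
print(f"normal approx    : {mu:.3f} / {sigma:.4f}")
# Probability the normal approximation assigns outside the possible [0, 1] range:
print("P(outside [0,1]):", stats.norm.cdf(0, mu, sigma) + stats.norm.sf(1, mu, sigma))
```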
19,738 | What is the difference between loss function and MLE? | A loss function is a measurement of model misfit as a function of the model parameters. Loss functions are more general than solely MLE.
MLE is a specific type of probability model estimation, where the loss function is the (log) likelihood. To paraphrase Matthew Drury's comment, MLE is one way to justify loss functions for probability models. | What is the difference between loss function and MLE? | A loss function is a measurement of model misfit as a function of the model parameters. Loss functions are more general than solely MLE.
MLE is a specific type of probability model estimation, where t | What is the difference between loss function and MLE?
A loss function is a measurement of model misfit as a function of the model parameters. Loss functions are more general than solely MLE.
MLE is a specific type of probability model estimation, where the loss function is the (log) likelihood. To paraphrase Matthew Drury's comment, MLE is one way to justify loss functions for probability models. | What is the difference between loss function and MLE?
A loss function is a measurement of model misfit as a function of the model parameters. Loss functions are more general than solely MLE.
MLE is a specific type of probability model estimation, where t |
19,739 | What is the difference between loss function and MLE? | Loss
In machine learning applications, such as neural networks, the loss function is used to assess the goodness of fit of a model. For instance, consider a simple neural net with one neuron and linear (identity) activation that has one input $x$ and one output $y$:
$$y=b+wx$$
We train this NN on the sample dataset: $(x_i,y_i)$ with $i=1,\dots,n$ observations. The training is trying different values of parameters $b,w$ and checking how good is the fit using the loss function. Suppose, that we want to use the quadratic cost:
$$C(e)=e^2$$
Then we have the following loss:
$$Loss(b,w|x,y)=\frac 1 n \sum_{i=1}^n C(y_i-b-wx_i)$$
Learning means minimizing this loss:
$$\min_{b,w} Loss(b,w|x,y)$$
MLE connection
You can pick the loss function whichever way you want, or whichever fits your problem. However, sometimes the loss function choice follows the MLE approach to your problem. For instance, the quadratic cost and the above loss function are natural choices if you deal with Gaussian linear regression. Here's how.
Suppose that somehow you know that the true model is $$y=b+wx+\varepsilon$$ with $\varepsilon\sim\mathcal N(0,\sigma^2)$ - random Gaussian error with a constant variance. If this is truly the case then it happens so that the MLE of the parameters $b,w$ is the same as the optimal solution using the above NN with quadratic cost (loss).
Note, that in NN you're not obliged to always pick cost (loss) function that matches some kind of MLE approach. Also, although I described this approach using the neural networks, it applies to other statistical learning techniques in machine learning and beyond. | What is the difference between loss function and MLE? | Loss
In machine learning applications, such as neural networks, the loss function is used to assess the goodness of fit of a model. For instance, consider a simple neural net with one neuron and linea | What is the difference between loss function and MLE?
Loss
In machine learning applications, such as neural networks, the loss function is used to assess the goodness of fit of a model. For instance, consider a simple neural net with one neuron and linear (identity) activation that has one input $x$ and one output $y$:
$$y=b+wx$$
We train this NN on the sample dataset: $(x_i,y_i)$ with $i=1,\dots,n$ observations. The training is trying different values of parameters $b,w$ and checking how good is the fit using the loss function. Suppose, that we want to use the quadratic cost:
$$C(e)=e^2$$
Then we have the following loss:
$$Loss(b,w|x,y)=\frac 1 n \sum_{i=1}^n C(y_i-b-wx_i)$$
Learning means minimizing this loss:
$$\min_{b,w} Loss(b,w|x,y)$$
MLE connection
You can pick the loss function whichever way you want, or whichever fits your problem. However, sometimes the loss function choice follows the MLE approach to your problem. For instance, the quadratic cost and the above loss function are natural choices if you deal with Gaussian linear regression. Here's how.
Suppose that somehow you know that the true model is $$y=b+wx+\varepsilon$$ with $\varepsilon\sim\mathcal N(0,\sigma^2)$ - random Gaussian error with a constant variance. If this is truly the case then it happens so that the MLE of the parameters $b,w$ is the same as the optimal solution using the above NN with quadratic cost (loss).
Note, that in NN you're not obliged to always pick cost (loss) function that matches some kind of MLE approach. Also, although I described this approach using the neural networks, it applies to other statistical learning techniques in machine learning and beyond. | What is the difference between loss function and MLE?
Loss
In machine learning applications, such as neural networks, the loss function is used to assess the goodness of fit of a model. For instance, consider a simple neural net with one neuron and linea |
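To make the quadratic-loss/Gaussian-MLE equivalence above concrete, here is a small numerical check (my own sketch with simulated data, not part of the original answer): minimising the quadratic loss and maximising the Gaussian likelihood return the same $b, w$ up to numerical error.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = rng.uniform(-3, 3, 200)
y = 1.5 + 0.8 * x + rng.normal(scale=0.5, size=200)   # y = b + w*x + Gaussian noise

# (1) Minimise the quadratic loss directly.
def quad_loss(params):
    b, w = params
    return np.mean((y - b - w * x) ** 2)

# (2) Maximise the Gaussian likelihood (minimise the negative log-likelihood).
def neg_log_lik(params):
    b, w, log_sigma = params
    sigma = np.exp(log_sigma)
    resid = y - b - w * x
    return 0.5 * np.sum(resid**2 / sigma**2) + len(y) * np.log(sigma)

fit_loss = minimize(quad_loss, x0=[0.0, 0.0])
fit_mle = minimize(neg_log_lik, x0=[0.0, 0.0, 0.0])
print("quadratic loss:", fit_loss.x)      # (b, w)
print("Gaussian MLE  :", fit_mle.x[:2])   # same (b, w) up to numerical error
```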
19,740 | What is the difference between loss function and MLE? | In machine learning, many people do not talk about assumptions (for example, that residuals are Gaussian) too much. And many people view the problem as a deterministic problem, where (a large amount of) data is given, and we want to minimize the loss.
In the classical statistics literature, there is usually not much data, and people talk about the probabilistic interpretation of the model, where there are many probabilistic assumptions (such as Gaussian residuals). With probabilistic assumptions, the likelihood can be calculated and the loss function can be the negative likelihood instead of (or as a proxy for) the misclassification rate.
It is also interesting to think about the generative model vs discriminative model perspective. Maximizing likelihood usually comes from a generative model, and minimizing loss usually comes from a discriminative model. | What is the difference between loss function and MLE? | In machine learning, many people do not talk about assumptions (for example, that residuals are Gaussian) too much. And many people view the problem as a deterministic problem, where (a large amount of) data | What is the difference between loss function and MLE?
In machine learning, many people do not talk about assumptions (for example, that residuals are Gaussian) too much. And many people view the problem as a deterministic problem, where (a large amount of) data is given, and we want to minimize the loss.
In the classical statistics literature, there is usually not much data, and people talk about the probabilistic interpretation of the model, where there are many probabilistic assumptions (such as Gaussian residuals). With probabilistic assumptions, the likelihood can be calculated and the loss function can be the negative likelihood instead of (or as a proxy for) the misclassification rate.
It is also interesting to think about the generative model vs discriminative model perspective. Maximizing likelihood usually comes from a generative model, and minimizing loss usually comes from a discriminative model. | What is the difference between loss function and MLE?
In machine learning, many people do not talk about assumptions (for example, that residuals are Gaussian) too much. And many people view the problem as a deterministic problem, where (a large amount of) data |
19,741 | What is the difference between loss function and MLE? | Can someone give examples of each under different modes (e.g. what is
the MLE in a Discrete Naive Bayes or in Logistic Regression), also how
they are related to the loss functions?
When we deal with machine learning algorithms we are:
1) specifying a probabilistic model that has parameters. For example the parameters in logistic regression and naive bayes in this answer.
2) learning the value of those parameters from data (sometimes maybe from some experts). Normally there are two methods: Maximum Likelihood Estimation (MLE) and Maximum A Posteriori (MAP). And the key point of MLE is that after training the learned parameters can make the observed data the most likely: $\theta_{ML}=\arg \max E_{x\sim \hat p_{data}}\log p_{model}(x; \theta)$. Source: Deep Learning Book 5.5. For an example you can see section 4.2 of this tutorial.
To get the parameters that can make the observed data most likely we need to get the likelihood function and to optimize the value of it by tuning the parameters. $L(\theta)=\prod_{i=1}^n f(X_i|\theta)$.
Other references: Stanford CS109 Parameter Estimation | What is the difference between loss function and MLE? | Can someone give examples of each under different modes (e.g. what is
the MLE in a Discrete Naive Bayes or in Logistic Regression), also how
they are related to the loss functions?
When we deal w | What is the difference between loss function and MLE?
Can someone give examples of each under different modes (e.g. what is
the MLE in a Discrete Naive Bayes or in Logistic Regression), also how
they are related to the loss functions?
When we deal with machine learning algorithms we are:
1) specifying a probabilistic model that has parameters. For example the parameters in logistic regression and naive bayes in this answer.
2) learning the value of those parameters from data (sometimes maybe from some experts). Normally there are two methods: Maximum Likelihood Estimation (MLE) and Maximum A Posteriori (MAP). And the key point of MLE is that after training the learned parameters can make the observed data the most likely: $\theta_{ML}=\arg \max E_{x\sim \hat p_{data}}\log p_{model}(x; \theta)$. Source: Deep Learning Book 5.5. For an example you can see section 4.2 of this tutorial.
To get the parameters that can make the observed data most likely we need to get the likelihood function and to optimize the value of it by tuning the parameters. $L(\theta)=\prod_{i=1}^n f(X_i|\theta)$.
Other references: Stanford CS109 Parameter Estimation | What is the difference between loss function and MLE?
Can someone give examples of each under different modes (e.g. what is
the MLE in a Discrete Naive Bayes or in Logistic Regression), also how
they are related to the loss functions?
When we deal w |
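As a tiny worked example of the likelihood $L(\theta)=\prod_{i=1}^n f(X_i|\theta)$ above (my own illustration, not from the answer): for Bernoulli data, the value of $\theta$ that maximises the log-likelihood over a grid coincides with the sample mean.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.binomial(1, 0.3, size=50)            # 50 Bernoulli(0.3) observations

theta_grid = np.linspace(0.01, 0.99, 981)
# log L(theta) = sum_i log f(x_i | theta) for the Bernoulli density.
log_lik = np.array([np.sum(x * np.log(t) + (1 - x) * np.log(1 - t)) for t in theta_grid])

theta_ml = theta_grid[log_lik.argmax()]
print(theta_ml, x.mean())                    # the grid maximiser matches the sample mean
```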
19,742 | Does correlated input data lead to overfitting with neural networks? | Actually no.
The question as such is a bit general, and mixes two things that are not really related. Overfitting usually is meant as the opposing quality to being a generalized description; in the sense that an overfitted (or overtrained) network will have less generalization power. This quality is primarily determined by the network architecture, the training and the validation procedure. The data and its properties only enter as "something that the training procedure happens on". This is more or less "text book knowledge"; you could try "An Introduction to Statistical Learning" by James, Witten, Hastie and Tibshirani. Or "Pattern Recognition" by Bishop (my favourite book ever on the general topic). Or "Pattern Recognition and Machine Learning", also by Bishop.
For the correlation itself: Consider the input space having a certain dimension. No matter what transformation you use, the dimensionality will remain the same -- linear algebra says so. In one case the given basis will be completely uncorrelated -- this is what you get when you de-correlate the variables, or simply apply PAT (Principal Axis Transformation). Take any linear algebra book for this.
Since a neural network with an appropriate architecture can model any (!) function, you can safely assume, that it also could first model the PAT and then do whatever it also should do -- e.g. classification, regression, etc.
You could also consider the correlation a feature, which should be part of the neural network description, since it's a property of the data. The nature of the correlation is not really important, unless it is something that should not be a part of the data. This would actually be a different topic -- you should model or quantify something like noise in the input and account for it.
So, in summary no. Correlated data means you should work harder to make the handling of data technically simpler and more effective. Overfitting can occur, but it won't happen because there is correlated data. | Does correlated input data lead to overfitting with neural networks? | Actually no.
The question as such is a bit general, and mixes two things that are not really related. Overfitting usually is meant as the opposing quality to being a generalized description; in the se | Does correlated input data lead to overfitting with neural networks?
Actually no.
The question as such is a bit general, and mixes two things that are not really related. Overfitting usually is meant as the opposing quality to being a generalized description; in the sense that an overfitted (or overtrained) network will have less generalization power. This quality is primarily determined by the network architecture, the training and the validation procedure. The data and its properties only enter as "something that the training procedure happens on". This is more or less "text book knowledge"; you could try "An Introduction to Statistical Learning" by James, Witten, Hastie and Tibshirani. Or "Pattern Recognition" by Bishop (my favourite book ever on the general topic). Or "Pattern Recognition and Machine Learning", also by Bishop.
For the correlation itself: Consider the input space having a certain dimension. No matter what transformation you use, the dimensionality will remain the same -- linear algebra says so. In one case the given basis will be completely uncorrelated -- this is what you get when you de-correlate the variables, or simply apply PAT (Principal Axis Transformation). Take any linear algebra book for this.
Since a neural network with an appropriate architecture can model any (!) function, you can safely assume that it could also first model the PAT and then do whatever else it should do -- e.g. classification, regression, etc.
You could also consider the correlation a feature, which should be part of the neural network description, since it's a property of the data. The nature of the correlation is not really important, unless it is something that should not be a part of the data. This would actually be a different topic -- you should model or quantify something like noise in the input and account for it.
So, in summary no. Correlated data means you should work harder to make the handling of data technically simpler and more effective. Overfitting can occur, but it won't happen because there is correlated data. | Does correlated input data lead to overfitting with neural networks?
Actually no.
The question as such is a bit general, and mixes two things that are not really related. Overfitting usually is meant as the opposing quality to being a generalized description; in the se |
19,743 | Does correlated input data lead to overfitting with neural networks? | cherub is correct in regards to his statements pertaining to over-fitting. However, I think the discussion of highly correlated features and ANN overly simplifies the issue.
Yes, it is true in theory that an ANN can approximate any function. However, in practice it is not a good idea to include numerous highly correlated features. Doing so will introduce many redundancies within the model. The inclusion of such redundancies will introduce unnecessary complexities and in doing so could increase the number of local minima. Given that the loss function of an ANN is not inherently smooth, introducing unnecessary roughness is not a great idea. | Does correlated input data lead to overfitting with neural networks? | cherub is correct in regards to his statements pertaining to over-fitting. However, I think the discussion of highly correlated features and ANN overly simplifies the issue.
Yes, it is true in the | Does correlated input data lead to overfitting with neural networks?
cherub is correct in regards to his statements pertaining to over-fitting. However, I think the discussion of highly correlated features and ANN overly simplifies the issue.
Yes, it is true in theory that an ANN can approximate any function. However, in practice it is not a good idea to include numerous highly correlated features. Doing so will introduce many redundancies within the model. The inclusion of such redundancies will introduce unnecessary complexities and in doing so could increase the number of local minima. Given that the loss function of an ANN is not inherently smooth, introducing unnecessary roughness is not a great idea. | Does correlated input data lead to overfitting with neural networks?
cherub is correct in regards to his statements pertaining to over-fitting. However, I think the discussion of highly correlated features and ANN overly simplifies the issue.
Yes, it is true in the |
19,744 | Does correlated input data lead to overfitting with neural networks? | Well, AlphaGo's value network has suffered from correlated data according to the authors, David Silver et al. The network predicts the outcome of a game, and the training data consists of professional games. They say that the network learned to recognize the games in the training data instead of learning the game itself. This is very plausible: learning the game is hard, but recognizing distinct games is much easier, and every game probably has some easily recognizable characteristic features. To mitigate this problem, they created a new dataset by taking only a single state from a game. This reduced the overfitting and the network generalized.
References:
Silver, David, et al. "Mastering the game of Go with deep neural networks and tree search." nature 529.7587 (2016): 484-489. | Does correlated input data lead to overfitting with neural networks? | Well, AlphaGo's value network has suffered from correlated data according to the authors, David Silver et al.. The network predicts the outcome of a game, and the training data consists of professiona | Does correlated input data lead to overfitting with neural networks?
Well, AlphaGo's value network has suffered from correlated data according to the authors, David Silver et al. The network predicts the outcome of a game, and the training data consists of professional games. They say that the network learned to recognize the games in the training data instead of learning the game itself. This is very plausible: learning the game is hard, but recognizing distinct games is much easier, and every game probably has some easily recognizable characteristic features. To mitigate this problem, they created a new dataset by taking only a single state from a game. This reduced the overfitting and the network generalized.
References:
Silver, David, et al. "Mastering the game of Go with deep neural networks and tree search." nature 529.7587 (2016): 484-489. | Does correlated input data lead to overfitting with neural networks?
Well, AlphaGo's value network has suffered from correlated data according to the authors, David Silver et al.. The network predicts the outcome of a game, and the training data consists of professiona |
19,745 | Predicting with random effects in mgcv gam | From version 1.8.8 of mgcv predict.gam has gained an exclude argument which allows for the zeroing out of terms in the model, including random effects, when predicting, without the dummy trick that was suggested previously.
predict.gam and predict.bam now accept an 'exclude' argument allowing terms (e.g. random effects) to be zeroed for prediction. For efficiency, smooth terms not in terms or in exclude are no longer evaluated, and are instead set to zero or not returned. See ?predict.gam.
library("mgcv")
require("nlme")
dum <- rep(1,18)
b1 <- gam(travel ~ s(Rail, bs="re", by=dum), data=Rail, method="REML")
b2 <- gam(travel ~ s(Rail, bs="re"), data=Rail, method="REML")
head(predict(b1, newdata = cbind(Rail, dum = dum))) # ranefs on
head(predict(b1, newdata = cbind(Rail, dum = 0))) # ranefs off
head(predict(b2, newdata = Rail, exclude = "s(Rail)")) # ranefs off, no dummy
> head(predict(b1, newdata = cbind(Rail, dum = dum))) # ranefs on
1 2 3 4 5 6
54.10852 54.10852 54.10852 31.96909 31.96909 31.96909
> head(predict(b1, newdata = cbind(Rail, dum = 0))) # ranefs off
1 2 3 4 5 6
66.5 66.5 66.5 66.5 66.5 66.5
> head(predict(b2, newdata = Rail, exclude = "s(Rail)")) # ranefs off, no dummy
1 2 3 4 5 6
66.5 66.5 66.5 66.5 66.5 66.5
Older approach
Simon Wood has used the following simple example to check this is working:
library("mgcv")
require("nlme")
dum <- rep(1,18)
b <- gam(travel ~ s(Rail, bs="re", by=dum), data=Rail, method="REML")
predict(b, newdata=data.frame(Rail="1", dum=0)) ## r.e. "turned off"
predict(b, newdata=data.frame(Rail="1", dum=1)) ## prediction with r.e
Which works for me. Likewise:
dum <- rep(1, NROW(na.omit(Orthodont)))
m <- gam(distance ~ s(age, bs = "re", by = dum) + Sex, data = Orthodont)
predict(m, data.frame(age = 8, Sex = "Female", dum = 1))
predict(m, data.frame(age = 8, Sex = "Female", dum = 0))
also works.
So I would check the data you are supplying in newdata is what you think it is as the problem may not be with VesselID — the error is coming from the function that would have been called by the predict() calls in the examples above, and Rail is a factor in the first example. | Predicting with random effects in mgcv gam | From version 1.8.8 of mgcv predict.gam has gained an exclude argument which allows for the zeroing out of terms in the model, including random effects, when predicting, without the dummy trick that wa | Predicting with random effects in mgcv gam
From version 1.8.8 of mgcv predict.gam has gained an exclude argument which allows for the zeroing out of terms in the model, including random effects, when predicting, without the dummy trick that was suggested previously.
predict.gam and predict.bam now accept an 'exclude' argument allowing terms (e.g. random effects) to be zeroed for prediction. For efficiency, smooth terms not in terms or in exclude are no longer evaluated, and are instead set to zero or not returned. See ?predict.gam.
library("mgcv")
require("nlme")
dum <- rep(1,18)
b1 <- gam(travel ~ s(Rail, bs="re", by=dum), data=Rail, method="REML")
b2 <- gam(travel ~ s(Rail, bs="re"), data=Rail, method="REML")
head(predict(b1, newdata = cbind(Rail, dum = dum))) # ranefs on
head(predict(b1, newdata = cbind(Rail, dum = 0))) # ranefs off
head(predict(b2, newdata = Rail, exclude = "s(Rail)")) # ranefs off, no dummy
> head(predict(b1, newdata = cbind(Rail, dum = dum))) # ranefs on
1 2 3 4 5 6
54.10852 54.10852 54.10852 31.96909 31.96909 31.96909
> head(predict(b1, newdata = cbind(Rail, dum = 0))) # ranefs off
1 2 3 4 5 6
66.5 66.5 66.5 66.5 66.5 66.5
> head(predict(b2, newdata = Rail, exclude = "s(Rail)")) # ranefs off, no dummy
1 2 3 4 5 6
66.5 66.5 66.5 66.5 66.5 66.5
Older approach
Simon Wood has used the following simple example to check this is working:
library("mgcv")
require("nlme")
dum <- rep(1,18)
b <- gam(travel ~ s(Rail, bs="re", by=dum), data=Rail, method="REML")
predict(b, newdata=data.frame(Rail="1", dum=0)) ## r.e. "turned off"
predict(b, newdata=data.frame(Rail="1", dum=1)) ## prediction with r.e
Which works for me. Likewise:
dum <- rep(1, NROW(na.omit(Orthodont)))
m <- gam(distance ~ s(age, bs = "re", by = dum) + Sex, data = Orthodont)
predict(m, data.frame(age = 8, Sex = "Female", dum = 1))
predict(m, data.frame(age = 8, Sex = "Female", dum = 0))
also works.
So I would check the data you are supplying in newdata is what you think it is as the problem may not be with VesselID — the error is coming from the function that would have been called by the predict() calls in the examples above, and Rail is a factor in the first example. | Predicting with random effects in mgcv gam
From version 1.8.8 of mgcv predict.gam has gained an exclude argument which allows for the zeroing out of terms in the model, including random effects, when predicting, without the dummy trick that wa |
19,746 | Why does R's lm() return different coefficient estimates than my textbook? | It looks like the author made a mathematical error somewhere.
If you expand the sum-of-squares deviation
$$
S = ((b+m)-1)^2+ ((b+2m)-5)^2 + ((b+4m)-9)^2
$$
you get
$$
\begin{split}
S = & b^2+2 b m+ m^2 + 1 - 2 b - 2 m \\
+ & b^2+4 b m+ 4 m^2 + 25 - 10 b -20 m \\
+ & b^2+8 b m+16 m^2 + 81 - 18 b -72 m
\end{split}
$$
which reduces to
$$
3 b^2 + 14 b m + 21 m^2 + 107 - 30 b - 94 m
$$
which is the same as the author's expression, except for the constant term (which doesn't matter anyway).
Now we need to try to minimize this by setting the derivatives of $S$ with respect to $b$ and $m$ to zero and solving the system.
$$
dS/db = 6 b + 14 m -30 \to 3 b +7 m-15 = 0
$$
$$
dS/dm = 14 b +42 m -94 \to 7 b + 21 m -47 = 0
$$
Solve
$$
\begin{split}
b & = (15-7m)/3 \\
0 & = 7 (15-7m)/3 + 21 m-47 \\
47 - 35 & = (-49/3 + 21) m \\
m & = (47-35)/(21-49/3) = 18/7
\end{split}
$$
R says this is indeed 2.571429 ...
Based on this link this seems to be from a Coursera course ... ? Maybe there was a mis-transcription of the data somewhere?
The other, independent way to do this calculation is to know that the estimated regression slope is equal to the sum of cross products ($\sum (y-\bar y) (x-\bar x)$) divided by the sum of squares ($\sum (x-\bar x)^2$).
g <- c(1,2,4)
g0 <- g - mean(g)
s <- c(1,5,9)
s0 <- s- mean(s)
sum(g0*s0)/(sum(g0^2))
## [1] 2.571429
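As one more cross-check (not in the original answer), fitting the line directly with lm() on the same three points reproduces the slope and also gives the intercept:
coef(lm(s ~ g))
## (Intercept)           g
##   -1.000000    2.571429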
I think if the shoe sizes were $\{1,11/3,9\}$ instead of $\{1,5,9\}$ then the slope would come out to 8/3 ... | Why does R's lm() return different coefficient estimates than my textbook? | It looks like the author made a mathematical error somewhere.
If you expand the sum-of-squares deviation
$$
S = ((b+m)-1)^2+ ((b+2m)-5)^2 + ((b+4m)-9)^2
$$
you get
$$
\begin{split}
S = & b^2+2 b m+ | Why does R's lm() return different coefficient estimates than my textbook?
It looks like the author made a mathematical error somewhere.
If you expand the sum-of-squares deviation
$$
S = ((b+m)-1)^2+ ((b+2m)-5)^2 + ((b+4m)-9)^2
$$
you get
$$
\begin{split}
S = & b^2+2 b m+ m^2 + 1 - 2 b - 2 m \\
+ & b^2+4 b m+ 4 m^2 + 25 - 10 b -20 m \\
+ & b^2+8 b m+16 m^2 + 81 - 18 b -72 m
\end{split}
$$
which reduces to
$$
3 b^2 + 14 b m + 21 m^2 + 107 - 30 b - 94 m
$$
which is the same as the author's expression, except for the constant term (which doesn't matter anyway).
Now we need to try to minimize this by setting the derivatives of $S$ with respect to $b$ and $m$ to zero and solving the system.
$$
dS/db = 6 b + 14 m -30 \to 3 b +7 m-15 = 0
$$
$$
dS/dm = 14 b +42 m -94 \to 7 b + 21 m -47 = 0
$$
Solve
$$
\begin{split}
b & = (15-7m)/3 \\
0 & = 7 (15-7m)/3 + 21 m-47 \\
47 - 35 & = (-49/3 + 21) m \\
m & = (47-35)/(21-49/3) = 18/7
\end{split}
$$
R says this is indeed 2.571429 ...
Based on this link this seems to be from a Coursera course ... ? Maybe there was a mis-transcription of the data somewhere?
The other, independent way to do this calculation is to know that the estimated regression slope is equal to the sum of cross products ($\sum (y-\bar y) (x-\bar x)$) divided by the sum of squares ($\sum (x-\bar x)^2$).
g <- c(1,2,4)
g0 <- g - mean(g)
s <- c(1,5,9)
s0 <- s- mean(s)
sum(g0*s0)/(sum(g0^2))
## [1] 2.571429
I think if the shoe sizes were $\{1,11/3,9\}$ instead of $\{1,5,9\}$ then the slope would come out to 8/3 ... | Why does R's lm() return different coefficient estimates than my textbook?
It looks like the author made a mathematical error somewhere.
If you expand the sum-of-squares deviation
$$
S = ((b+m)-1)^2+ ((b+2m)-5)^2 + ((b+4m)-9)^2
$$
you get
$$
\begin{split}
S = & b^2+2 b m+ |
19,747 | In left skewed data, what is the relationship between mean and median? | It's a nontrivial question (surely not as trivial as the people asking the question appear to think).
The difficulty is ultimately caused by the fact that we don't really know what we mean by 'skewness' - a lot of the time it's kind of obvious, but sometimes it really isn't. Given the difficulty in pinning down what we mean by 'location' and 'spread' in nontrivial cases (for example, the mean isn't always what we mean when we talk about location), it should be no great surprise that a more subtle concept like skewness is at least as slippery. So this leads us to try various algebraic definitions of what we mean, and they don't always agree with each other.
If you measure skewness by the second Pearson skewness coefficient, then the mean ($\mu$) will be less than the median ($\stackrel{\sim}{\mu}$ -- i.e. in this case you have it backwards).
The (population) second Pearson skewness is $$\frac{3(\mu-\stackrel{\sim}{\mu})}{\sigma}\,,$$ and will be negative ("left skew") when $\mu<\stackrel{\sim}{\mu}$.
The sample versions of these statistics work similarly.
The reason for the necessary relationship between mean and median in this case is because that's how the skewness measure is defined.
Here's a left-skewed density (by both the second Pearson measure and the more common measure in (2) below):
The median is marked in the lower margin in green, the mean in red.
So I expect the answer they want you to give is that the mean is less than the median. It's usually the case with the sorts of distributions we tend to give names to.
(But read on, and see why that's not actually correct as a general statement.)
If you measure it by the more usual standardized third moment, then it is often, but by no means always, the case that the mean will be less than the median.
That is, it's possible to construct examples where the opposite is true, or where one skewness measure is zero while the other is non-zero.
Which is to say, there's no necessary relationship between the locations of the mean, median and the moment-skewness.
Consider, for example, the following sample (the same example can be constructed as a discrete probability distribution):
2.7 15.0 15.0 15.0 30.0 30.0
mean: 17.95
median: 15
The mean is larger than the median, yet the third-moment skewness coefficient is negative (i.e. by its lights, we have left-skew data) since the sum of the cubes of the deviations from the mean is negative.
So in that sense, left-skew, but mean>median.
(On the other hand, if you change 2.7 in the above example to 3, then you have an example where the moment-skewness is zero, yet the mean exceeds the median. If you make it 3.3, then the moment-skewness is positive, and the mean exceeds the median - i.e. is finally in the 'anticipated' direction.)
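A quick numerical check of the sample above in base R (the last line is just the standardised third moment of the sample):
x <- c(2.7, 15, 15, 15, 30, 30)
mean(x)                            # 17.95
median(x)                          # 15, so mean > median
sum((x - mean(x))^3)               # negative: left skew by the moment measure
mean((x - mean(x))^3) / sd(x)^3    # rough sample skewness, also negative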
If you use the first Pearson skewness instead of either of the above definitions, you have a similar issue to this case - the direction of the skewness does not pin down the relation between mean and median in general.
Edit: in answer to a question in comments -- an example where the mean and median are equal, but the moment-skewness is negative. Consider the following data (as before, it also counts as an example for a discrete population; consider writing the numbers on the faces of a die).
1 5 6 6 8 10
the mean and the median are both 6, but the sum of cubes of deviations from the mean are negative, so the third moment skewness is negative. | In left skewed data, what is the relationship between mean and median? | It's a nontrivial question (surely not as trivial as the people asking the question appear to think).
The difficulty is ultimately caused by the fact that we don't really know what we mean by 'skewnes | In left skewed data, what is the relationship between mean and median?
It's a nontrivial question (surely not as trivial as the people asking the question appear to think).
The difficulty is ultimately caused by the fact that we don't really know what we mean by 'skewness' - a lot of the time it's kind of obvious, but sometimes it really isn't. Given the difficulty in pinning down what we mean by 'location' and 'spread' in nontrivial cases (for example, the mean isn't always what we mean when we talk about location), it should be no great surprise that a more subtle concept like skewness is at least as slippery. So this leads us to try various algebraic definitions of what we mean, and they don't always agree with each other.
If you measure skewness by the second Pearson skewness coefficient, then the mean ($\mu$) will be less than the median ($\stackrel{\sim}{\mu}$ -- i.e. in this case you have it backwards).
The (population) second Pearson skewness is $$\frac{3(\mu-\stackrel{\sim}{\mu})}{\sigma}\,,$$ and will be negative ("left skew") when $\mu<\stackrel{\sim}{\mu}$.
The sample versions of these statistics work similarly.
The reason for the necessary relationship between mean and median in this case is because that's how the skewness measure is defined.
Here's a left-skewed density (by both the second Pearson measure and the more common measure in (2) below):
The median is marked in the lower margin in green, the mean in red.
So I expect the answer they want you to give is that the mean is less than the median. It's usually the case with the sorts of distributions we tend to give names to.
(But read on, and see why that's not actually correct as a general statement.)
If you measure it by the more usual standardized third moment, then it is often, but by no means always, the case that the mean will be less than the median.
That is, it's possible to construct examples where the opposite is true, or where one skewness measure is zero while the other is non-zero.
Which is to say, there's no necessary relationship between the locations of the mean, median and the moment-skewness.
Consider, for example, the following sample (the same example can be constructed as a discrete probability distribution):
2.7 15.0 15.0 15.0 30.0 30.0
mean: 17.95
median: 15
The mean is larger than the median, yet the third-moment skewness coefficient is negative (i.e. by its lights, we have left-skew data) since the sum of the cubes of the deviations from the mean is negative.
So in that sense, left-skew, but mean>median.
(On the other hand, if you change 2.7 in the above example to 3, then you have an example where the moment-skewness is zero, yet the mean exceeds the median. If you make it 3.3, then the moment-skewness is positive, and the mean exceeds the median - i.e. is finally in the 'anticipated' direction.)
If you use the first Pearson skewness instead of either of the above definitions, you have a similar issue to this case - the direction of the skewness does not pin down the relation between mean and median in general.
Edit: in answer to a question in comments -- an example where the mean and median are equal, but the moment-skewness is negative. Consider the following data (as before, it also counts as an example for a discrete population; consider writing the numbers on the faces of a die).
1 5 6 6 8 10
the mean and the median are both 6, but the sum of cubes of deviations from the mean are negative, so the third moment skewness is negative. | In left skewed data, what is the relationship between mean and median?
It's a nontrivial question (surely not as trivial as the people asking the question appear to think).
The difficulty is ultimately caused by the fact that we don't really know what we mean by 'skewnes |
19,748 | In left skewed data, what is the relationship between mean and median? | No. Left skewed data has a long tail on the left (low end) so the mean will usually be less than the median. (But see @Glen_b 's answer for an exception). Casually, I think data that "looks" left skewed will have mean less than median.
Right skewed data is more common; for instance, income. There the mean is greater than the median.
R code
set.seed(123) #set random seed
normdata <- rnorm(1000) #Normal data, skew = 0
extleft <- c(rep(-10, 5), rep(-20, 5)) #Some data to make skew left
alldata <- c(normdata,extleft)
library(moments)
skewness(alldata) #-6.77
mean(alldata) #-0.13
median(alldata) #-0.001 | In left skewed data, what is the relationship between mean and median? | No. Left skewed data has a long tail on the left (low end) so the mean will usually be less than the median. (But see @Glen_b 's answer for an exception). Casually, I think data that "looks" left ske | In left skewed data, what is the relationship between mean and median?
No. Left skewed data has a long tail on the left (low end) so the mean will usually be less than the median. (But see @Glen_b 's answer for an exception). Casually, I think data that "looks" left skewed will have mean less than median.
Right skewed data is more common; for instance, income. There the mean is greater than the median.
R code
set.seed(123) #set random seed
normdata <- rnorm(1000) #Normal data, skew = 0
extleft <- c(rep(-10, 5), rep(-20, 5)) #Some data to make skew left
alldata <- c(normdata,extleft)
library(moments)
skewness(alldata) #-6.77
mean(alldata) #-0.13
median(alldata) #-0.001 | In left skewed data, what is the relationship between mean and median?
No. Left skewed data has a long tail on the left (low end) so the mean will usually be less than the median. (But see @Glen_b 's answer for an exception). Casually, I think data that "looks" left ske |
19,749 | Gaussian Process regression for high dimensional data sets | Gaussian process models are generally fine with high dimensional datasets (I have used them with microarray data etc). The key is in choosing good values for the hyper-parameters (which effectively control the complexity of the model in a similar manner to regularisation).
Sparse methods and pseudo-input methods are more for datasets with a large number of samples (> approx 4000 for my computer) rather than a large number of features. If you have a powerful enough computer to perform a Cholesky decomposition of the covariance matrix (n by n where n is the number of samples), then you probably don't need these methods.
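For concreteness, here is a bare-bones R sketch of that exact-GP computation (my own toy example with a squared-exponential kernel and hand-picked hyper-parameters; use a proper toolbox for real work):
set.seed(1)
n <- 200
x <- matrix(runif(n * 3), n, 3)                    # n samples, 3 features
y <- sin(2 * pi * x[, 1]) + 0.1 * rnorm(n)
sqexp <- function(A, B, ell = 0.3, sf2 = 1) {      # squared-exponential kernel
  D2 <- as.matrix(dist(rbind(A, B)))^2
  sf2 * exp(-D2[1:nrow(A), nrow(A) + (1:nrow(B))] / (2 * ell^2))
}
sn2 <- 0.01                                        # noise variance
K <- sqexp(x, x) + sn2 * diag(n)
U <- chol(K)                                       # the O(n^3) step: n x n Cholesky
alpha <- backsolve(U, forwardsolve(t(U), y))       # solves K %*% alpha = y
xnew <- matrix(runif(5 * 3), 5, 3)
mu <- sqexp(xnew, x) %*% alpha                     # posterior mean at new inputs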
If you are a MATLAB user, then I'd strongly recommend the GPML toolbox and the book by Rasmussen and Williams as good places to start.
HOWEVER, if you are interested in feature selection, then I would avoid GPs. The standard approach to feature selection with GPs is to use an Automatic Relevance Determination kernel (e.g. covSEard in GPML), and then achieve feature selection by tuning the kernel parameters to maximise the marginal likelihood. Unfortunately that is very likely to end up over-fitting the marginal likelihood and ending up with a model that performs (possibly much) worse than a model with a simple spherical radial basis function (covSEiso in GPML) covariance.
My current research focus lies on over-fitting in model selection, and I have found that this is as much a problem for evidence maximisation in GPs as it is for cross-validation based optimisation of hyper-parameters in kernel models; for details see this paper, and this one.
Feature selection for non-linear models is very tricky. Often you get better performance by sticking to a linear model and using L1 regularisation type approaches (Lasso/LARS/Elastic net etc.) to achieve sparsity or random forest methods. | Gaussian Process regression for high dimensional data sets | Gaussian process models are generally fine with high dimensional datasets (I have used them with microarray data etc). They key is in choosing good values for the hyper-parameters (which effectively | Gaussian Process regression for high dimensional data sets
Gaussian process models are generally fine with high dimensional datasets (I have used them with microarray data etc). The key is in choosing good values for the hyper-parameters (which effectively control the complexity of the model in a similar manner to regularisation).
Sparse methods and pseudo-input methods are more for datasets with a large number of samples (> approx 4000 for my computer) rather than a large number of features. If you have a powerful enough computer to perform a Cholesky decomposition of the covariance matrix (n by n where n is the number of samples), then you probably don't need these methods.
If you are a MATLAB user, then I'd strongly recommend the GPML toolbox and the book by Rasmussen and Williams as good places to start.
HOWEVER, if you are interested in feature selection, then I would avoid GPs. The standard approach to feature selection with GPs is to use an Automatic Relevance Determination kernel (e.g. covSEard in GPML), and then achieve feature selection by tuning the kernel parameters to maximise the marginal likelihood. Unfortunately that is very likely to end up over-fitting the marginal likelihood and ending up with a model that performs (possibly much) worse than a model with a simple spherical radial basis function (covSEiso in GPML) covariance.
My current research focus lies on over-fitting in model selection, and I have found that this is as much a problem for evidence maximisation in GPs as it is for cross-validation based optimisation of hyper-parameters in kernel models; for details see this paper, and this one.
Feature selection for non-linear models is very tricky. Often you get better performance by sticking to a linear model and using L1 regularisation type approaches (Lasso/LARS/Elastic net etc.) to achieve sparsity or random forest methods. | Gaussian Process regression for high dimensional data sets
Gaussian process models are generally fine with high dimensional datasets (I have used them with microarray data etc). They key is in choosing good values for the hyper-parameters (which effectively |
19,750 | Gaussian Process regression for high dimensional data sets | You can try to use covariance functions designed specially to treat high dimensional data. Look through the paper on Additive covariance function for example. They have worked better than other state-of-the-art covariance functions in my numerical experiments with some real data of rather big input dimension (about $30$).
However, if the input dimension is really huge (more than $100$ or $200$) it seems that any kernel method will fail, and there is no exception for Gaussian process regression. | Gaussian Process regression for high dimensional data sets | You can try to use covariance functions designed specially to treat high dimensional data. Look through the paper on Additive covariance function for example. They have worked better than other state- | Gaussian Process regression for high dimensional data sets
You can try to use covariance functions designed specially to treat high dimensional data. Look through the paper on Additive covariance function for example. They have worked better than other state-of-the-art covariance functions in my numerical experiments with some real data of rather big input dimension (about $30$).
However, if the input dimension is really huge (more than $100$ or $200$) it seems that any kernel method will fail, and there is no exception for Gaussian process regression. | Gaussian Process regression for high dimensional data sets
You can try to use covariance functions designed specially to treat high dimensional data. Look through the paper on Additive covariance function for example. They have worked better than other state- |
19,751 | Gaussian Process regression for high dimensional data sets | Something you can try is to use Gaussian process regressors (GPRs) as the base regressors of a Bagging Regressor. Each base regressor will be trained on a subset of the samples and the features such that the dimensionality of the training data is drastically reduced for each regressor. I have observed that doing so increases the speed and the accuracy because it may be faster to train multiple GPRs on subsets of the data than one GPR on the whole high-dimensional data. Moreover, aggregating the outcomes of multiple weak GPRs improves performance by mitigating variance. | Gaussian Process regression for high dimensional data sets | Something you can try is to use Gaussian process regressors (GPRs) as the base regressors of a Bagging Regressor. Each base regressor will be trained on a subset of the samples and the features such t | Gaussian Process regression for high dimensional data sets
Something you can try is to use Gaussian process regressors (GPRs) as the base regressors of a Bagging Regressor. Each base regressor will be trained on a subset of the samples and the features such that the dimensionality of the training data is drastically reduced for each regressor. I have observed that doing so increases the speed and the accuracy because it may be faster to train multiple GPRs on subsets of the data than one GPR on the whole high-dimensional data. Moreover, aggregating the outcomes of multiple weak GPRs improves performance by mitigating variance. | Gaussian Process regression for high dimensional data sets
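A rough R sketch of that bagging idea, assuming the gausspr() Gaussian process regressor from the kernlab package (the data, subsample fractions and number of base learners are invented for illustration):
library(kernlab)
set.seed(7)
n <- 300; p <- 50
X <- matrix(rnorm(n * p), n, p)
y <- X[, 1] - 0.5 * X[, 2]^2 + rnorm(n, sd = 0.3)
B <- 25                                          # number of base GP regressors
preds <- matrix(NA_real_, n, B)
for (b in 1:B) {
  rows <- sample(n, floor(0.6 * n))              # subsample the cases
  cols <- sample(p, floor(0.3 * p))              # and the features
  fit <- gausspr(X[rows, cols, drop = FALSE], y[rows])
  preds[, b] <- predict(fit, X[, cols, drop = FALSE])
}
bagged <- rowMeans(preds)                        # aggregate by averaging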
Something you can try is to use Gaussian process regressors (GPRs) as the base regressors of a Bagging Regressor. Each base regressor will be trained on a subset of the samples and the features such t |
19,752 | Comparing importance of different sets of predictors | Suggestions
You could perform individual multiple regressions for each type of predictor, and compare across multiple regressions, adjusted r-square, generalised r-square, or some other parsimony adjusted measure of variance explained.
You could alternatively explore the general literature on variable importance (see here for a discussion with links). This would encourage a focus on the importance of individual predictors.
In some situations hierarchical regression may provide a useful framework. You would enter one type of variable in one block (e.g., cognitive variables), and in the second block another type (e.g., social variables). This would help answer the question of whether one type of variable predicts over and above another type.
As a side examination, you could run a factor analysis on the predictor variables to examine whether the correlations between predictor variables map on to the assignment of variables to types.
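For the hierarchical (blockwise) regression suggestion, a minimal R sketch with invented variable names and simulated data could look like this:
set.seed(1)
dat <- data.frame(cog1 = rnorm(100), cog2 = rnorm(100),
                  soc1 = rnorm(100), soc2 = rnorm(100))
dat$outcome <- 0.5 * dat$cog1 + 0.3 * dat$soc1 + rnorm(100)
m_cog  <- lm(outcome ~ cog1 + cog2, data = dat)                # block 1: cognitive
m_both <- lm(outcome ~ cog1 + cog2 + soc1 + soc2, data = dat)  # block 2 added: social
anova(m_cog, m_both)                                           # does block 2 predict over and above block 1?
summary(m_both)$adj.r.squared - summary(m_cog)$adj.r.squared   # change in adjusted R-squared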
Caveats
Types of variables such as cognitive, social, and behavioural are broad classes of variables. A given study will always include only a subset of the possible variables, and typically such a subset is small relative to the possible variables.
Furthermore, the measured variables may not be the most reliable or valid means of measuring the intended construct.
Thus, you need to be careful when drawing the broader inference about the relative importance of a given type of variable over and beyond what was actually measured.
You also need to consider any bias in the way that the dependent variable was measured. Particularly in psychological studies, there is a tendency for self-report measures to correlate well with self-report, ability with ability, other-report with other report, and so on. The issue is that the mode of measurement has a large effect over and beyond the actual construct of interest. Thus, if the dependent variable is measured in a particular way (e.g., self-report), then don't over-interpret larger correlations with one type of predictor if that type also uses self-report. | Comparing importance of different sets of predictors | Suggestions
You could perform individual multiple regressions for each type of predictor, and compare across multiple regressions, adjusted r-square, generalised r-square, or some other parsimony adj | Comparing importance of different sets of predictors
Suggestions
You could perform individual multiple regressions for each type of predictor, and compare across multiple regressions, adjusted r-square, generalised r-square, or some other parsimony adjusted measure of variance explained.
You could alternatively explore the general literature on variable importance (see here for a discussion with links). This would encourage a focus on the importance of individual predictors.
In some situations hierarchical regression may provide a useful framework. You would enter one type of variable in one block (e.g., cognitive variables), and in the second block another type (e.g., social variables). This would help answer the question of whether one type of variable predicts over and above another type.
As a side examination, you could run a factor analysis on the predictor variables to examine whether the correlations between predictor variables map on to the assignment of variables to types.
Caveats
Types of variables such as cognitive, social, and behavioural are broad classes of variables. A given study will always include only a subset of the possible variables, and typically such a subset is small relative to the possible variables.
Furthermore, the measured variables may not be the most reliable or valid means of measuring the intended construct.
Thus, you need to be careful when drawing the broader inference about the relative importance of a given type of variable over and beyond what was actually measured.
You also need to consider any bias in the way that the dependent variable was measured. Particularly in psychological studies, there is a tendency for self-report measures to correlate well with self-report, ability with ability, other-report with other report, and so on. The issue is that the mode of measurement has a large effect over and beyond the actual construct of interest. Thus, if the dependent variable is measured in a particular way (e.g., self-report), then don't over-interpret larger correlations with one type of predictor if that type also uses self-report. | Comparing importance of different sets of predictors
Suggestions
You could perform individual multiple regressions for each type of predictor, and compare across multiple regressions, adjusted r-square, generalised r-square, or some other parsimony adj |
19,753 | Comparing importance of different sets of predictors | Suppose that the first set of predictors requires $a$ degrees of freedom ($a \geq 4$ allowing for nonlinear terms), the second set requires $b$, and the third requires $c$ ($c \geq 3$) allowing for nonlinear terms. Compute the likelihood ratio $\chi^2$ test for the combined partial effects of each set, yielding $L_{1}, L_{2}, L_{3}$. The expected value of a $\chi^2$ random variable with $d$ degrees of freedom is $d$, so subtract $d$ to level the playing field. I.e., compute $L_{1}-a, L_{2}-b, L_{3}-c$. If using F-tests, multiply F by its numerator d.f. to get the $\chi^2$ scale. | Comparing importance of different sets of predictors | Suppose that the first set of predictors requires $a$ degrees of freedom ($a \geq 4$ allowing for no | Comparing importance of different sets of predictors
Suppose that the first set of predictors requires $a$ degrees of freedom ($a \geq 4$ allowing for nonlinear terms), the second set requires $b$, and the third requires $c$ ($c \geq 3$) allowing for nonlinear terms. Compute the likelihood ratio $\chi^2$ test for the combined partial effects of each set, yielding $L_{1}, L_{2}, L_{3}$. The expected value of a $\chi^2$ random variable with $d$ degrees of freedom is $d$, so subtract $d$ to level the playing field. I.e., compute $L_{1}-a, L_{2}-b, L_{3}-c$. If using F-tests, multiply F by its numerator d.f. to get the $\chi^2$ scale. | Comparing importance of different sets of predictors
Suppose that the first set of predictors requires $a$ degrees of freedom ($a \geq 4$ allowing for nonlinear terms), the second set requires $b$, and the third requires $c$ ($c \geq 3$) allowing for no |
19,754 | Comparing importance of different sets of predictors | Importance
First thing to do is operationalise 'importance of predictors'. I shall assume that it means something like 'sensitivity of mean outcome to changes in predictor values'. Since your predictors are grouped then sensitivity of the mean outcome to groups of predictors is more interesting than a variable by variable analysis. I leave it open whether sensitivity is understood causally. That issue is picked up later.
Three versions of importance
Lots of variance explained: I'm guessing that psychologists' first port of call is probably a variance decomposition leading to a measure of how much outcome variance is explained by the variance-covariance structure in each group of predictors. Not being an experimentalist I can't suggest much here, except to note that the whole 'variance explained' concept is a bit ungrounded for my taste, even without the 'which sum of which squares' issue. Others are welcome to disagree and develop it further.
Large standardized coefficients: SPSS offers the (misnamed) beta to measure impact in a way that is comparable across variables. There are several reasons not to use this, discussed in Fox's regression textbook, here, and elsewhere. All apply here. It also ignores group structure.
On the other hand, I imagine that one could standardise predictors in groups and use covariance information to judge the effect of a one standard deviation movement in all of them. Personally, the motto "if a thing's not worth doing, it's not worth doing well" damps my interest in doing so.
Large marginal effects: The other approach is to stay on the scale of the measurements and calculate marginal effects between carefully chosen sample points.
Because you are interested in groups it is useful to choose points to vary groups of variables rather than single ones, e.g. manipulating both cognitive variables at once. (Lots of opportunity for cool plots here). Basic paper here. The effects package in R will do this nicely.
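A small sketch of that kind of display, assuming the effects package (the model and variable names here are invented):
library(effects)
set.seed(2)
d <- data.frame(x1 = rnorm(200), x2 = rnorm(200), z = rnorm(200))
d$y <- 1 + 0.8 * d$x1 - 0.4 * d$x2 + 0.5 * d$x1 * d$x2 + rnorm(200)
fit <- lm(y ~ x1 * x2 + z, data = d)
eff <- allEffects(fit)    # fitted response over chosen values of each predictor (group)
plot(eff)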
There are two caveats here:
If you do that you will want to watch out that you are not choosing two cognitive variables that while individually plausible, e.g. medians, are jointly far from any subject observation.
Some variables are not even theoretically manipulable, so the interpretation of marginal effects as causal is more delicate, though still useful.
Different numbers of predictors
Issues arise due to the grouped variables' covariance structure, which we normally try not to worry about but for this task should.
In particular when calculating marginal effects (or standardized coefficients for that matter) on groups rather than single variables the curse of dimensionality will for larger groups make it easier for comparisons to stray into regions where there are no cases. More predictors in a group lead to a more sparsely populated space, so any importance measure will depend more on model assumptions and less on observations (but will not tell you that...) But these are the same issues as in the model fitting phase really. Certainly the same ones as would arise in a model-based causal impact assessment. | Comparing importance of different sets of predictors | Importance
First thing to do is operationalise 'importance of predictors'. I shall assume that it means something like 'sensitivity of mean outcome to changes in predictor values'. Since your predic | Comparing importance of different sets of predictors
Importance
First thing to do is operationalise 'importance of predictors'. I shall assume that it means something like 'sensitivity of mean outcome to changes in predictor values'. Since your predictors are grouped then sensitivity of the mean outcome to groups of predictors is more interesting than a variable by variable analysis. I leave it open whether sensitivity is understood causally. That issue is picked up later.
Three versions of importance
Lots of variance explained: I'm guessing that psychologists' first port of call is probably a variance decomposition leading to a measure of how much outcome variance is explained by the variance-covariance structure in each group of predictors. Not being an experimentalist I can't suggest much here, except to note that the whole 'variance explained' concept is a bit ungrounded for my taste, even without the 'which sum of which squares' issue. Others are welcome to disagree and develop it further.
Large standardized coefficients: SPSS offers the (misnamed) beta to measure impact in a way that is comparable across variables. There are several reasons not to use this, discussed in Fox's regression textbook, here, and elsewhere. All apply here. It also ignores group structure.
On the other hand, I imagine that one could standardise predictors in groups and use covariance information to judge the effect of a one standard deviation movement in all of them. Personally, the motto "if a thing's not worth doing, it's not worth doing well" damps my interest in doing so.
Large marginal effects: The other approach is to stay on the scale of the measurements and calculate marginal effects between carefully chosen sample points.
Because you are interested in groups it is useful to choose points to vary groups of variables rather than single ones, e.g. manipulating both cognitive variables at once. (Lots of opportunity for cool plots here). Basic paper here. The effects package in R will do this nicely.
There are two caveats here:
If you do that you will want to watch out that you are not choosing two cognitive variables that while individually plausible, e.g. medians, are jointly far from any subject observation.
Some variables are not even theoretically manipulable, so the interpretation of marginal effects as causal is more delicate, though still useful.
Different numbers of predictors
Issues arise due to the grouped variables' covariance structure, which we normally try not to worry about but for this task should.
In particular when calculating marginal effects (or standardized coefficients for that matter) on groups rather than single variables the curse of dimensionality will for larger groups make it easier for comparisons to stray into regions where there are no cases. More predictors in a group lead to a more sparsely populated space, so any importance measure will depend more on model assumptions and less on observations (but will not tell you that...) But these are the same issues as in the model fitting phase really. Certainly the same ones as would arise in a model-based causal impact assessment. | Comparing importance of different sets of predictors
Importance
First thing to do is operationalise 'importance of predictors'. I shall assume that it means something like 'sensitivity of mean outcome to changes in predictor values'. Since your predic |
19,755 | Comparing importance of different sets of predictors | One method is to combine the sets of variables into sheaf variables. This method has been used extensively in sociology and related areas.
Refs:
Whitt, Hugh P. 1986. "The Sheaf Coefficient: A Simplified and Expanded Approach." Social Science Research 15:174-189. | Comparing importance of different sets of predictors | One method is to combine the sets of variables into sheaf variables. This method has been used extensively in sociology and related areas.
Refs:
Whitt, Hugh P. 1986. "The Sheaf Coefficient: A Simplif | Comparing importance of different sets of predictors
One method is to combine the sets of variables into sheaf variables. This method has been used extensively in sociology and related areas.
Refs:
Whitt, Hugh P. 1986. "The Sheaf Coefficient: A Simplified and Expanded Approach." Social Science Research 15:174-189. | Comparing importance of different sets of predictors
One method is to combine the sets of variables into sheaf variables. This method has been used extensively in sociology and related areas.
Refs:
Whitt, Hugh P. 1986. "The Sheaf Coefficient: A Simplif |
19,756 | Simulation of ARIMA (1,1,0) series | If you want to simulate ARIMA you can use arima.sim in R, there is no need to do it by hand. This will generate the series you want.
e <- rnorm(100,0,0.345)
arima.sim(n=100,model=list(ar=-0.7048,order=c(1,1,0)),start.innov=4.1,n.start=1,innov=2.1+e)
You can look at the code of how this is achieved by typing arima.sim in R command line.
Alternatively if you do it yourself, the function you are probably looking is diffinv. It computes the inverse of lagged differences.
For recursive sequences R has a nice function filter. So instead of using loop
z <- rep(NA,100)
z[1] <- 4.1
for (i in 2:100) z[i]=cons+phi*z[i-1]+e[i]
you can write
filter(c(4.1,2.1+e),filter=-0.7048,method="recursive")
This will give the identical result to arima.sim example above:
diffinv(filter(c(4.1,2.1+e),filter=-0.7048,method="recursive")[-1]) | Simulation of ARIMA (1,1,0) series | If you want to simulate ARIMA you can use arima.sim in R, there is no need to do it by hand. This will generate the series you want.
e <- rnorm(100,0,0.345)
arima.sim(n=100,model=list(ar=-0.7048,orde | Simulation of ARIMA (1,1,0) series
If you want to simulate ARIMA you can use arima.sim in R, there is no need to do it by hand. This will generate the series you want.
e <- rnorm(100,0,0.345)
arima.sim(n=100,model=list(ar=-0.7048,order=c(1,1,0)),start.innov=4.1,n.start=1,innov=2.1+e)
You can look at the code of how this is achieved by typing arima.sim in R command line.
Alternatively if you do it yourself, the function you are probably looking for is diffinv. It computes the inverse of lagged differences.
For recursive sequences R has a nice function filter. So instead of using loop
z <- rep(NA,100)
z[1] <- 4.1
for (i in 2:100) z[i]=cons+phi*z[i-1]+e[i]
you can write
filter(c(4.1,2.1+e),filter=-0.7048,method="recursive")
This will give the identical result to arima.sim example above:
diffinv(filter(c(4.1,2.1+e),filter=-0.7048,method="recursive")[-1]) | Simulation of ARIMA (1,1,0) series
If you want to simulate ARIMA you can use arima.sim in R, there is no need to do it by hand. This will generate the series you want.
e <- rnorm(100,0,0.345)
arima.sim(n=100,model=list(ar=-0.7048,orde |
19,757 | Simulation of ARIMA (1,1,0) series | While it's true that you can inspect arima.sim to see what it's doing, I found this blog post more helpful for pedagogical purposes. It starts from a single white noise series and shows how various ARIMA models are built up from that series. | Simulation of ARIMA (1,1,0) series | While it's true that you can inspect arima.sim to see what it's doing, I found this blog post more helpful for pedagogical purposes. It starts from a single white noise series and shows how various AR | Simulation of ARIMA (1,1,0) series
While it's true that you can inspect arima.sim to see what it's doing, I found this blog post more helpful for pedagogical purposes. It starts from a single white noise series and shows how various ARIMA models are built up from that series. | Simulation of ARIMA (1,1,0) series
While it's true that you can inspect arima.sim to see what it's doing, I found this blog post more helpful for pedagogical purposes. It starts from a single white noise series and shows how various AR |
19,758 | Variance of two weighted random variables | Using $w_1 + w_2 = 1$, compute
$$\eqalign{
\text{Var}(w_1 A + w_2 B) &= \left( w_1 \sigma_1 + w_2 \sigma_2 \right)^2 \cr
&= \left( w_1(\sigma_1 - \sigma_2) + \sigma_2 \right)^2 \text{.}
} $$
This shows that when $\sigma_1 \ne \sigma_2$, the graph of the variance versus $w_1$ (shown sideways in the illustration) is a parabola centered at $\sigma_2 / (\sigma_2 - \sigma_1)$. No portion of any parabola is linear. With $\sigma_1 = 5$ and $\sigma_2 = 4$, the center is at $-4$: way below the graph at the scale in which it is drawn. Thus, you are looking at a small piece of a parabola, which will appear linear.
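A quick numerical check of that vertex (same $\sigma$ values and the same perfectly correlated expression as above):
sigma1 <- 5; sigma2 <- 4
V <- function(w1) (w1 * sigma1 + (1 - w1) * sigma2)^2
optimize(V, c(-10, 2))$minimum   # about -4 = sigma2/(sigma2 - sigma1)
curve(V, from = 0, to = 1)       # over [0, 1] the piece of the parabola looks almost linear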
When $\sigma_1 = \sigma_2$, the variance is a linear function of $w_1$. In this case the plot would be a perfectly vertical line segment.
BTW, you knew this answer already, without calculation, because basic principles imply the plot of variance cannot be a line unless it is vertical. After all, there is no mathematical or statistical requirement that $w_1$ lie between $0$ and $1$: any value of $w_1$ determines a new random variable (a linear combination of the random variables A and B), which therefore must have a non-negative value for its variance. Therefore all these curves (even when extended to the full vertical range of $w_1$) must lie to the right of the vertical axis. That precludes all lines except vertical ones.
Plot of the variance for $\rho = 1 - 2^{-k}, k = -1, 0, 1, \ldots, 10$: | Variance of two weighted random variables | Using $w_1 + w_2 = 1$, compute
$$\eqalign{
\text{Var}(w_1 A + w_2 B) &= \left( w_1 \sigma_1 + w_2 \sigma_2 \right)^2 \cr
&= \left( w_1(\sigma_1 - \sigma_2) + \sigma_2 \right)^2 \text{.}
} $$
This show | Variance of two weighted random variables
Using $w_1 + w_2 = 1$, compute
$$\eqalign{
\text{Var}(w_1 A + w_2 B) &= \left( w_1 \sigma_1 + w_2 \sigma_2 \right)^2 \cr
&= \left( w_1(\sigma_1 - \sigma_2) + \sigma_2 \right)^2 \text{.}
} $$
This shows that when $\sigma_1 \ne \sigma_2$, the graph of the variance versus $w_1$ (shown sideways in the illustration) is a parabola centered at $\sigma_2 / (\sigma_2 - \sigma_1)$. No portion of any parabola is linear. With $\sigma_1 = 5$ and $\sigma_2 = 4$, the center is at $-4$: way below the graph at the scale in which it is drawn. Thus, you are looking at a small piece of a parabola, which will appear linear.
When $\sigma_1 = \sigma_2$, the variance is a linear function of $w_1$. In this case the plot would be a perfectly vertical line segment.
BTW, you knew this answer already, without calculation, because basic principles imply the plot of variance cannot be a line unless it is vertical. After all, there is no mathematical or statistical requirement that $w_1$ lie between $0$ and $1$: any value of $w_1$ determines a new random variable (a linear combination of the random variables A and B), which therefore must have a non-negative value for its variance. Therefore all these curves (even when extended to the full vertical range of $w_1$) must lie to the right of the vertical axis. That precludes all lines except vertical ones.
Plot of the variance for $\rho = 1 - 2^{-k}, k = -1, 0, 1, \ldots, 10$: | Variance of two weighted random variables
Using $w_1 + w_2 = 1$, compute
$$\eqalign{
\text{Var}(w_1 A + w_2 B) &= \left( w_1 \sigma_1 + w_2 \sigma_2 \right)^2 \cr
&= \left( w_1(\sigma_1 - \sigma_2) + \sigma_2 \right)^2 \text{.}
} $$
This show |
19,759 | Variance of two weighted random variables | It isn't linear. The formula says it isn't linear. Trust your mathematical instinct!
It only appears linear in the graph because of the scale, with $\sigma_{1}=5$ and $\sigma_{2}=4$. Try it yourself: calculate the slopes at a few places and you will see that they differ. You can exaggerate the difference by picking $\sigma_{1}=37$, say.
Here is some R code:
a <- 5; b <- 4; p <- 1
f <- function(w) w^2*a^2 + (1-w)^2*b^2 + 2*w*(1-w)*p*a*b
curve(f, from = 0, to = 1)
If you would like to check some slopes:
(f(0.5) - f(0.4)) / 0.1
(f(0.8) - f(0.7)) / 0.1 | Variance of two weighted random variables | It isn't linear. The formula says it isn't linear. Trust your mathematical instinct!
It only appears linear in the graph because of the scale, with $\sigma_{1}=5$ and $\sigma_{2}=4$. Try it yourse | Variance of two weighted random variables
It isn't linear. The formula says it isn't linear. Trust your mathematical instinct!
It only appears linear in the graph because of the scale, with $\sigma_{1}=5$ and $\sigma_{2}=4$. Try it yourself: calculate the slopes at a few places and you will see that they differ. You can exaggerate the difference by picking $\sigma_{1}=37$, say.
Here is some R code:
a <- 5; b <- 4; p <- 1
f <- function(w) w^2*a^2 + (1-w)^2*b^2 + 2*w*(1-w)*p*a*b
curve(f, from = 0, to = 1)
If you would like to check some slopes:
(f(0.5) - f(0.4)) / 0.1
(f(0.8) - f(0.7)) / 0.1 | Variance of two weighted random variables
It isn't linear. The formula says it isn't linear. Trust your mathematical instinct!
It only appears linear in the graph because of the scale, with $\sigma_{1}=5$ and $\sigma_{2}=4$. Try it yourse |
19,760 | How do I compute whether my linear regression has a statistically significant difference from a known theoretical line? | This type of situation can be handled by a standard F-test for nested models. Since you want to test both of the parameters against a null model with fixed parameters, your hypotheses are:
$$H_0: \boldsymbol{\beta} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \quad \quad \quad H_A: \boldsymbol{\beta} \neq \begin{bmatrix} 0 \\ 1 \end{bmatrix} .$$
The F-test involves fitting both models and comparing their residual sum-of-squares, which are:
$$SSE_0 = \sum_{i=1}^n (y_i-x_i)^2 \quad \quad \quad SSE_A = \sum_{i=1}^n (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i)^2$$
The test statistic is:
$$F \equiv F(\mathbf{y}, \mathbf{x}) = \frac{n-2}{2} \cdot \frac{SSE_0 - SSE_A}{SSE_A}.$$
The corresponding p-value is:
$$p \equiv p(\mathbf{y}, \mathbf{x}) = \int \limits_{F(\mathbf{y}, \mathbf{x}) }^\infty \text{F-Dist}(r | 2, n-2) \ dr.$$
Implementation in R: Suppose your data is in a data-frame called DATA with variables called y and x. The F-test can be performed manually with the following code. In the simulated mock data I have used, you can see that the estimated coefficients are close to the ones in the null hypothesis, and the p-value of the test shows no significant evidence to falsify the null hypothesis that the true regression function is the identity function.
#Generate mock data (you can substitute your data if you prefer)
set.seed(12345);
n <- 1000;
x <- rnorm(n, mean = 0, sd = 5);
e <- rnorm(n, mean = 0, sd = 2/sqrt(1+abs(x)));
y <- x + e;
DATA <- data.frame(y = y, x = x);
#Fit initial regression model
MODEL <- lm(y ~ x, data = DATA);
#Calculate test statistic
SSE0 <- sum((DATA$y-DATA$x)^2);
SSEA <- sum(MODEL$residuals^2);
F_STAT <- ((n-2)/2)*((SSE0 - SSEA)/SSEA);
P_VAL <- pf(q = F_STAT, df1 = 2, df2 = n-2, lower.tail = FALSE);
#Plot the data and show test outcome
plot(DATA$x, DATA$y,
main = 'All Residuals',
sub = paste0('(Test against identity function - F-Stat = ',
sprintf("%.4f", F_STAT), ', p-value = ', sprintf("%.4f", P_VAL), ')'),
xlab = 'Dataset #1 Normalized residuals',
ylab = 'Dataset #2 Normalized residuals');
abline(lm(y ~ x, DATA), col = 'red', lty = 2, lwd = 2);
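As a cross-check (not part of the original procedure), the same joint test can be obtained with the car package, assuming it is installed; the reported F statistic should match F_STAT above:
library(car);
linearHypothesis(MODEL, c("(Intercept) = 0", "x = 1"));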
The summary output and plot for this data look like this:
summary(MODEL);
Call:
lm(formula = y ~ x, data = DATA)
Residuals:
Min 1Q Median 3Q Max
-4.8276 -0.6742 0.0043 0.6703 5.1462
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.02784 0.03552 -0.784 0.433
x 1.00507 0.00711 141.370 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.122 on 998 degrees of freedom
Multiple R-squared: 0.9524, Adjusted R-squared: 0.9524
F-statistic: 1.999e+04 on 1 and 998 DF, p-value: < 2.2e-16
F_STAT;
[1] 0.5370824
P_VAL;
[1] 0.5846198 | How do I compute whether my linear regression has a statistically significant difference from a know | This type of situation can be handled by a standard F-test for nested models. Since you want to test both of the parameters against a null model with fixed parameters, your hypotheses are:
$$H_0: \bo | How do I compute whether my linear regression has a statistically significant difference from a known theoretical line?
This type of situation can be handled by a standard F-test for nested models. Since you want to test both of the parameters against a null model with fixed parameters, your hypotheses are:
$$H_0: \boldsymbol{\beta} = \begin{bmatrix} 0 \\ 1 \end{bmatrix} \quad \quad \quad H_A: \boldsymbol{\beta} \neq \begin{bmatrix} 0 \\ 1 \end{bmatrix} .$$
The F-test involves fitting both models and comparing their residual sum-of-squares, which are:
$$SSE_0 = \sum_{i=1}^n (y_i-x_i)^2 \quad \quad \quad SSE_A = \sum_{i=1}^n (y_i - \hat{\beta}_0 - \hat{\beta}_1 x_i)^2$$
The test statistic is:
$$F \equiv F(\mathbf{y}, \mathbf{x}) = \frac{n-2}{2} \cdot \frac{SSE_0 - SSE_A}{SSE_A}.$$
The corresponding p-value is:
$$p \equiv p(\mathbf{y}, \mathbf{x}) = \int \limits_{F(\mathbf{y}, \mathbf{x}) }^\infty \text{F-Dist}(r | 2, n-2) \ dr.$$
Implementation in R: Suppose your data is in a data-frame called DATA with variables called y and x. The F-test can be performed manually with the following code. In the simulated mock data I have used, you can see that the estimated coefficients are close to the ones in the null hypothesis, and the p-value of the test shows no significant evidence to falsify the null hypothesis that the true regression function is the identity function.
#Generate mock data (you can substitute your data if you prefer)
set.seed(12345);
n <- 1000;
x <- rnorm(n, mean = 0, sd = 5);
e <- rnorm(n, mean = 0, sd = 2/sqrt(1+abs(x)));
y <- x + e;
DATA <- data.frame(y = y, x = x);
#Fit initial regression model
MODEL <- lm(y ~ x, data = DATA);
#Calculate test statistic
SSE0 <- sum((DATA$y-DATA$x)^2);
SSEA <- sum(MODEL$residuals^2);
F_STAT <- ((n-2)/2)*((SSE0 - SSEA)/SSEA);
P_VAL <- pf(q = F_STAT, df1 = 2, df2 = n-2, lower.tail = FALSE);
#Plot the data and show test outcome
plot(DATA$x, DATA$y,
main = 'All Residuals',
sub = paste0('(Test against identity function - F-Stat = ',
sprintf("%.4f", F_STAT), ', p-value = ', sprintf("%.4f", P_VAL), ')'),
xlab = 'Dataset #1 Normalized residuals',
ylab = 'Dataset #2 Normalized residuals');
abline(lm(y ~ x, DATA), col = 'red', lty = 2, lwd = 2);
The summary output and plot for this data look like this:
summary(MODEL);
Call:
lm(formula = y ~ x, data = DATA)
Residuals:
Min 1Q Median 3Q Max
-4.8276 -0.6742 0.0043 0.6703 5.1462
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.02784 0.03552 -0.784 0.433
x 1.00507 0.00711 141.370 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.122 on 998 degrees of freedom
Multiple R-squared: 0.9524, Adjusted R-squared: 0.9524
F-statistic: 1.999e+04 on 1 and 998 DF, p-value: < 2.2e-16
F_STAT;
[1] 0.5370824
P_VAL;
[1] 0.5846198 | How do I compute whether my linear regression has a statistically significant difference from a know
This type of situation can be handled by a standard F-test for nested models. Since you want to test both of the parameters against a null model with fixed parameters, your hypotheses are:
$$H_0: \bo |
19,761 | How do I compute whether my linear regression has a statistically significant difference from a known theoretical line? | Here is a cool graphical method which I cribbed from Julian Faraway's excellent book "Linear Models With R (Second Edition)". It's simultaneous 95% confidence intervals for the intercept and slope, plotted as an ellipse.
For illustration, I created 500 observations with a variable "x" having N(mean=10,sd=5) distribution and then a variable "y" whose distribution is N(mean=x,sd=2). That yields a correlation of a little over 0.9 which may not be quite as tight as your data.
You can check the ellipse to see if the point (intercept=0, slope=1) falls within or outside that simultaneous confidence region.
library(tidyverse)
library(ellipse)
#>
#> Attaching package: 'ellipse'
#> The following object is masked from 'package:graphics':
#>
#> pairs
set.seed(50)
dat <- data.frame(x=rnorm(500,10,5)) %>% mutate(y=rnorm(n(),x,2))
lmod1 <- lm(y~x,data=dat)
summary(lmod1)
#>
#> Call:
#> lm(formula = y ~ x, data = dat)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -6.9652 -1.1796 -0.0576 1.2802 6.0212
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 0.24171 0.20074 1.204 0.229
#> x 0.97753 0.01802 54.246 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 2.057 on 498 degrees of freedom
#> Multiple R-squared: 0.8553, Adjusted R-squared: 0.855
#> F-statistic: 2943 on 1 and 498 DF, p-value: < 2.2e-16
cor(dat$y,dat$x)
#> [1] 0.9248032
plot(y~x,dat)
abline(0,1)
confint(lmod1)
#> 2.5 % 97.5 %
#> (Intercept) -0.1526848 0.6361047
#> x 0.9421270 1.0129370
plot(ellipse(lmod1,c("(Intercept)","x")),type="l")
points(coef(lmod1)["(Intercept)"],coef(lmod1)["x"],pch=19)
abline(v=confint(lmod1)["(Intercept)",],lty=2)
abline(h=confint(lmod1)["x",],lty=2)
points(0,1,pch=1,cex=3)
abline(v=0,lty=10)
abline(h=0,lty=10)
Created on 2019-01-21 by the reprex package (v0.2.1) | How do I compute whether my linear regression has a statistically significant difference from a know | Here is a cool graphical method which I cribbed from Julian Faraway's excellent book "Linear Models With R (Second Edition)". It's simultaneous 95% confidence intervals for the intercept and slope, pl | How do I compute whether my linear regression has a statistically significant difference from a known theoretical line?
Here is a cool graphical method which I cribbed from Julian Faraway's excellent book "Linear Models With R (Second Edition)". It's simultaneous 95% confidence intervals for the intercept and slope, plotted as an ellipse.
For illustration, I created 500 observations with a variable "x" having N(mean=10,sd=5) distribution and then a variable "y" whose distribution is N(mean=x,sd=2). That yields a correlation of a little over 0.9 which may not be quite as tight as your data.
You can check the ellipse to see if the point (intercept=0,slope=1) fall within or outside that simultaneous confidence interval.
library(tidyverse)
library(ellipse)
#>
#> Attaching package: 'ellipse'
#> The following object is masked from 'package:graphics':
#>
#> pairs
set.seed(50)
dat <- data.frame(x=rnorm(500,10,5)) %>% mutate(y=rnorm(n(),x,2))
lmod1 <- lm(y~x,data=dat)
summary(lmod1)
#>
#> Call:
#> lm(formula = y ~ x, data = dat)
#>
#> Residuals:
#> Min 1Q Median 3Q Max
#> -6.9652 -1.1796 -0.0576 1.2802 6.0212
#>
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 0.24171 0.20074 1.204 0.229
#> x 0.97753 0.01802 54.246 <2e-16 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 2.057 on 498 degrees of freedom
#> Multiple R-squared: 0.8553, Adjusted R-squared: 0.855
#> F-statistic: 2943 on 1 and 498 DF, p-value: < 2.2e-16
cor(dat$y,dat$x)
#> [1] 0.9248032
plot(y~x,dat)
abline(0,1)
confint(lmod1)
#> 2.5 % 97.5 %
#> (Intercept) -0.1526848 0.6361047
#> x 0.9421270 1.0129370
plot(ellipse(lmod1,c("(Intercept)","x")),type="l")
points(coef(lmod1)["(Intercept)"],coef(lmod1)["x"],pch=19)
abline(v=confint(lmod1)["(Intercept)",],lty=2)
abline(h=confint(lmod1)["x",],lty=2)
points(0,1,pch=1,size=3)
#> Warning in plot.xy(xy.coords(x, y), type = type, ...): "size" is not a
#> graphical parameter
abline(v=0,lty=10)
abline(h=0,lty=10)
Created on 2019-01-21 by the reprex package (v0.2.1) | How do I compute whether my linear regression has a statistically significant difference from a know
Here is a cool graphical method which I cribbed from Julian Faraway's excellent book "Linear Models With R (Second Edition)". It's simultaneous 95% confidence intervals for the intercept and slope, pl |
19,762 | How do I compute whether my linear regression has a statistically significant difference from a known theoretical line? | You could compute the coefficients with n bootstrapped samples. This will likely result in normally distributed coefficient values (Central limit theorem). With that you could then construct a (e.g. 95%) confidence interval with t-values (n-1 degrees of freedom) around the mean. If your CI does not include 1 (0), it is statistically significantly different, or more precisely: You can reject the null hypothesis of an equal slope. | How do I compute whether my linear regression has a statistically significant difference from a know | You could compute the coefficients with n bootstrapped samples. This will likely result in normally distributed coefficient values (Central limit theorem). With that you could then construct a (e.g. 95% | How do I compute whether my linear regression has a statistically significant difference from a known theoretical line?
You could compute the coefficients with n bootstrapped samples. This will likely result in normally distributed coefficient values (Central limit theorem). With that you could then construct a (e.g. 95%) confidence interval with t-values (n-1 degrees of freedom) around the mean. If your CI does not include 1 (0), it is statistically significantly different, or more precisely: You can reject the null hypothesis of an equal slope. | How do I compute whether my linear regression has a statistically significant difference from a know
You could compute the coefficients with n bootstrapped samples. This will likely result in normally distributed coefficient values (Central limit theorem). With that you could then construct a (e.g. 95%
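A minimal sketch of the bootstrap approach described in the answer above, assuming your data sit in a data frame DATA with columns y and x (hypothetical names) and using B resamples:
set.seed(1)
B <- 2000
boot_coefs <- t(replicate(B, {
  i <- sample(nrow(DATA), replace = TRUE)   # resample rows with replacement
  coef(lm(y ~ x, data = DATA[i, ]))         # refit and keep intercept and slope
}))
# 95% intervals as suggested: mean +/- t-quantile * sd of the bootstrap estimates
m <- colMeans(boot_coefs); s <- apply(boot_coefs, 2, sd)
rbind(lower = m - qt(0.975, B - 1) * s, upper = m + qt(0.975, B - 1) * s)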
19,763 | How do I compute whether my linear regression has a statistically significant difference from a known theoretical line? | You could perform a simple test of hypothesis, namely a t-test. For the intercept your null hypothesis is $\beta_0=0$ (note that this is the significance test), and for the slope you have that under H0 $\beta_1=1$. | How do I compute whether my linear regression has a statistically significant difference from a know | You could perform a simple test of hypothesis, namely a t-test. For the intercept your null hypothesis is $\beta_0=0$ (note that this is the significance test), and for the slope you have that under H | How do I compute whether my linear regression has a statistically significant difference from a known theoretical line?
You could perform a simple test of hypothesis, namely a t-test. For the intercept your null hypothesis is $\beta_0=0$ (note that this is the significance test), and for the slope you have that under H0 $\beta_1=1$. | How do I compute whether my linear regression has a statistically significant difference from a know
You could perform a simple test of hypothesis, namely a t-test. For the intercept your null hypothesis is $\beta_0=0$ (note that this is the significance test), and for the slope you have that under H |
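For the t-test answer above, note that standard regression output tests the slope against 0, so the statistic against 1 has to be formed by hand. A minimal sketch, assuming a fitted model fit <- lm(y ~ x) (hypothetical names):
est <- coef(summary(fit))                      # columns: Estimate, Std. Error, t value, Pr(>|t|)
t_slope <- (est["x", "Estimate"] - 1) / est["x", "Std. Error"]
2 * pt(abs(t_slope), df = fit$df.residual, lower.tail = FALSE)   # two-sided p-value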
19,764 | How do I compute whether my linear regression has a statistically significant difference from a known theoretical line? | You should fit a linear regression and check the 95% confidence intervals for the two parameters. If the CI of the slope includes 1 and the CI of the intercept (offset) includes 0, the two-sided test is non-significant at approximately the (95%)^2 joint level -- as we use two separate tests, the type-I risk increases.
Using R:
fit = lm(Y ~ X)
confint(fit)
or you use
summary(fit)
and calc the 2 sigma intervals by yourself. | How do I compute whether my linear regression has a statistically significant difference from a know | You should fit a linear regression and check the 95% confidence intervals for the two parameters. If the CI of the slope includes 1 and the CI of the offset includes 0 the two sided test is insignific | How do I compute whether my linear regression has a statistically significant difference from a known theoretical line?
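A rough sketch of those "2 sigma" intervals computed by hand from the summary table (same fit object as above):
est <- coef(summary(fit))
cbind(lower = est[, "Estimate"] - 2 * est[, "Std. Error"],
      upper = est[, "Estimate"] + 2 * est[, "Std. Error"])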
You should fit a linear regression and check the 95% confidence intervals for the two parameters. If the CI of the slope includes 1 and the CI of the offset includes 0 the two sided test is insignificant approx. on the (95%)^2 level -- as we use two separate tests the typ-I risk increases.
Using R:
fit = lm(Y ~ X)
confint(fit)
or you use
summary(fit)
and calc the 2 sigma intervals by yourself. | How do I compute whether my linear regression has a statistically significant difference from a know
You should fit a linear regression and check the 95% confidence intervals for the two parameters. If the CI of the slope includes 1 and the CI of the offset includes 0 the two sided test is insignific |
19,765 | Should $ R^2$ be calculated on training data or test data? | The test data shows you how well your model has generalized. When you run the test data through your model, it is the moment you've been waiting for: is it good enough?
In the machine learning world, it is very common to present all of the train, validation and the test metrics, but it is the test accuracy that is the most important.
However, if you get a low $R^2$ score on one and not the other, then something is off! E.g. if $R^2_{\text{test}}\ll R^2_{\text{training}}$, it indicates that your model does not generalize well: it performs poorly on the "unseen" data points of the test set (which may also reflect a form of covariate shift).
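A minimal sketch of that comparison on hypothetical data (using one common convention for the test-set $R^2$, with the training mean in the denominator):
set.seed(1)
n <- 200
x <- rnorm(n); y <- 1 + 2 * x + rnorm(n)
train <- sample(n, 150); test <- setdiff(seq_len(n), train)
fit <- lm(y ~ x, subset = train)
r2_train <- summary(fit)$r.squared
pred <- predict(fit, newdata = data.frame(x = x[test]))
r2_test <- 1 - sum((y[test] - pred)^2) / sum((y[test] - mean(y[train]))^2)
c(r2_train, r2_test)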
In conclusion: you should compare them! However, in many cases, it's the test-set results you're most interested in. | Should $ R^2$ be calculated on training data or test data? | The test data shows you how well your model has generalized. When you run the test data through your model, it is the moment you've been waiting for: is it good enough?
In the machine learning world, | Should $ R^2$ be calculated on training data or test data?
The test data shows you how well your model has generalized. When you run the test data through your model, it is the moment you've been waiting for: is it good enough?
In the machine learning world, it is very common to present all of the train, validation and the test metrics, but it is the test accuracy that is the most important.
However, if you get a low $R^2$ score on one, and not the other, then something is off! E.g. If the $R^2_{\text{test}}\ll R^2_{\text{training}}$, then it indicates that your model does not generalize well. That is, if e.g. your test set only contains "unseen" data points, then your model would not appear to extrapolate well (aka a form of covariate shift).
In conclusion: you should compare them! However, in many cases, it's the test-set results you're most interested in. | Should $ R^2$ be calculated on training data or test data?
The test data shows you how well your model has generalized. When you run the test data through your model, it is the moment you've been waiting for: is it good enough?
In the machine learning world, |
19,766 | Should $ R^2$ be calculated on training data or test data? | When calculating the $R^2$ value of a linear regression model, should it be calculated on the training dataset, test dataset or both and why?
The usual $R^2$ is a fitting measure and must be calculated on the training set. In some regression analyses there is no in-sample vs out-of-sample split, and "in sample" = all data.
Furthermore, when calculating $SS_{\text{res}}$ and $SS_{\text{tot}}$ as per the wikipedia article above, should both sums be over the same data set?
Yes, of course. You have three objects: the residual sum of squares, the total sum of squares, and the explained sum of squares. All of them are computed "in sample".
If you are interested in prediction accuracy, using in-sample measures such as the usual $R^2$ is not a good idea (a quite common mistake in the past). You need the so-called out-of-sample $R^2$. Read here: How to calculate out of sample R squared? | Should $ R^2$ be calculated on training data or test data? | When calculating the $R^2$
value of
a linear regression model, should it be calculated on the training
dataset, test dataset or both and why?
The usual $R^2$ is a fitting measure and must be calculat | Should $ R^2$ be calculated on training data or test data?
When calculating the $R^2$
value of
a linear regression model, should it be calculated on the training
dataset, test dataset or both and why?
The usual $R^2$ is a fitting measure and must be calculated on the training set. In some regression analysis there is no split in vs out of sample and "in sample = all data".
Furthermore, when calculating $SS_{\text{res}}$ and $SS_{\text{tot}}$
as per the wikipedia article above, should both sums be over the same
data set?
Yes, of course.You have three object: residual sum of square, total sum of square, explained sum of square. All of them are computed "in sample".
If you are interested in prediction accuracy, to use in sample measures, as the usual $R^2$, is not a good idea (quite common mistake in the past). You need the so called out of sample $R^2$. Read here: How to calculate out of sample R squared? | Should $ R^2$ be calculated on training data or test data?
When calculating the $R^2$
value of
a linear regression model, should it be calculated on the training
dataset, test dataset or both and why?
The usual $R^2$ is a fitting measure and must be calculat |
19,767 | Should $ R^2$ be calculated on training data or test data? | An elaboration of the above answer on why it's not a good idea to calculate $R^2$ on test data different from the learning data.
To measure "predictive power" of model, how good it performs on data outside of learning dataset, one should use $R^2_{oos}$ instead of $R^2$. OOS stands from "out of sample".
In $R^2_{oos}$, the denominator $ \Sigma (y - \bar{y}_{test})^2 $ is replaced by $ \Sigma (y - \bar{y}_{train})^2 $.
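Written out explicitly (one common convention; $\hat{y}_i$ denotes the model's prediction for test observation $i$):
$$R^2_{oos} = 1 - \frac{\sum_{i \in \text{test}} (y_i - \hat{y}_i)^2}{\sum_{i \in \text{test}} (y_i - \bar{y}_{\text{train}})^2}$$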
If you want to know exactly what happens if one ignores $R^2_{oos}$ and uses $R^2$ on test dataset, read below.
I discovered, to my surprise, that when the target variable has high variance compared to the "signal" (the dependency on the feature), calculating $R^2$ on a test dataset (different from the learning dataset) is guaranteed to produce a negative $R^2$.
Below I put Jupyter notebook code in Python so anyone can reproduce it and see it for themselves:
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
x = np.linspace(-1, 1, num=1_000_000)
X = x.reshape(-1, 1)
# notice 0.05 << 1 (variance of y)
y = np.random.normal(x * 0.05)
df = pd.DataFrame({'X': pd.Series(x), 'y': pd.Series(y)})
ax = sns.histplot(data=df, x='X', y='y', bins=(200, 100))
ax.figure.set_figwidth(18)
ax.figure.set_figheight(9)
ax.grid()
plt.show()
from sklearn import ensemble
from sklearn.model_selection import cross_val_score
fraction=0.0001
reg = ensemble.ExtraTreesRegressor(
n_estimators=20, min_samples_split=fraction * 2, min_samples_leaf=fraction
)
_ = reg.fit(X, y)
print(f'r2 score on learn dataset: {reg.score(X, y)}')
print('Notice above, r2 calculated on learn dataset is positive')
X_pred = np.linspace(-1, 1, num=100)
y_pred = reg.predict(X_pred.reshape(-1, 1))
plt.plot(X_pred, y_pred)
plt.gca().grid()
plt.gca().set_title('Model has correctly captured the trend')
plt.show()
r2 score on learn dataset: 0.0049158435364208275
Notice above, r2 calculated on learn dataset is positive
scores = cross_val_score(reg, X, y, scoring='r2')
print(f'r2 {scores.mean():.4f} ± {scores.std():.4f}')
r2 -0.0023 ± 0.0028
Despite the model correctly capturing the trend, cross-validation consistently produces a negative r2 on the test dataset (different from the learning dataset).
UPDATE 2022-01-19
The example I presented above has a technical error: the actual reason for the negative $R^2$ is the lack of shuffling of $X$, $y$ before cross-validating. Still, the point stands: after shuffling one can still get a negative $R^2$ reliably, and this is still fixed by using $R^2_{oos}$ instead. See Corrected example at github.
UPDATE 2023-04-28
Finally, a good article with detailed study of $R^2_{oos}$
https://arxiv.org/pdf/2302.05131.pdf | Should $ R^2$ be calculated on training data or test data? | An elaboration of the above answer on why it's not a good idea to calculate $R^2$ on test data, different than learning data.
To measure "predictive power" of model, how good it performs on data outsi | Should $ R^2$ be calculated on training data or test data?
An elaboration of the above answer on why it's not a good idea to calculate $R^2$ on test data, different than learning data.
To measure "predictive power" of model, how good it performs on data outside of learning dataset, one should use $R^2_{oos}$ instead of $R^2$. OOS stands from "out of sample".
In $R^2_{oos}$ in denominator we replace $ \Sigma (y - \bar{y}_{test})^2 $ by $ \Sigma (y - \bar{y}_{train})^2 $
If you want to know exactly what happens if one ignores $R^2_{oos}$ and uses $R^2$ on test dataset, read below.
I discovered, to my surprise, when the target variable has high variance compared to "signal" (dependency on feature), then calculating $R^2$ on test dataset (different from learning dataset) will produce negative $R^2$ with guarantee.
Below I put jupyter notebook code in Pyhton so anyone can reproduce it and see it themeselves:
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
x = np.linspace(-1, 1, num=1_000_000)
X = x.reshape(-1, 1)
# notice 0.05 << 1 (variance of y)
y = np.random.normal(x * 0.05)
df = pd.DataFrame({'X': pd.Series(x), 'y': pd.Series(y)})
ax = sns.histplot(data=df, x='X', y='y', bins=(200, 100))
ax.figure.set_figwidth(18)
ax.figure.set_figheight(9)
ax.grid()
plt.show()
from sklearn import ensemble
from sklearn.model_selection import cross_val_score
fraction=0.0001
reg = ensemble.ExtraTreesRegressor(
n_estimators=20, min_samples_split=fraction * 2, min_samples_leaf=fraction
)
_ = reg.fit(X, y)
print(f'r2 score on learn dataset: {reg.score(X, y)}')
print('Notice above, r2 calculated on learn dataset is positive')
X_pred = np.linspace(-1, 1, num=100)
y_pred = reg.predict(X_pred.reshape(-1, 1))
plt.plot(X_pred, y_pred)
plt.gca().grid()
plt.gca().set_title('Model has correctly captured the trend')
plt.show()
r2 score on learn dataset: 0.0049158435364208275
Notice above, r2 calculated on learn dataset is positive
scores = cross_val_score(reg, X, y, scoring='r2')
print(f'r2 {scores.mean():.4f} ± {scores.std():.4f}')
r2 -0.0023 ± 0.0028
Despite model correctly capturing the trend, cross-validation consistently produces negative r2 on test dataset, different from learning dataset.
UPDATE 2022-01-19
The example I presented above has a technical error, the actual reason for negative $R^2$ is lack of shuffling of $X$, $y$ before cross-validating. Still, the point is correct, after shuffling one can still get negative $R^2$ reliably, and this is still fixed by using $R^2_{oos}$ instead. See Corrected example at github.
UPDATE 2023-04-28
Finally, a good article with detailed study of $R^2_{oos}$
https://arxiv.org/pdf/2302.05131.pdf | Should $ R^2$ be calculated on training data or test data?
An elaboration of the above answer on why it's not a good idea to calculate $R^2$ on test data, different than learning data.
To measure "predictive power" of model, how good it performs on data outsi |
19,768 | Why a T-statistic needs the data to follow a normal distribution | The information you require is in the "Characterization" section of the Wiki page. A $t$-distribution with degrees of freedom $\nu$ may be defined as the distribution of the random variable $T$ such that
$$T = \dfrac{Z}{\sqrt{V/\nu}} \,,$$
where $Z$ is a standard normal distribution random variable and $V$ is a $\chi^2$ random variable with degrees of freedom $\nu$. In addition, $Z$ and $V$ must be independent. So given any $Z$ and $V$ that follow the above definition, you can then arrive at a random variable that has a $t$-distribution.
Now, suppose $X_1, X_2, \dots, X_n$ is distributed according to a distribution $F$. Let $F$ have mean $\mu$ and variance $\sigma^2$. Let $\bar{X}$ be the sample mean and $S^2$ be the sample variance. We will then look at the formulae:
$$\dfrac{\bar{X} - \mu}{S/\sqrt{n}} = \dfrac{\frac{\bar{X} - \mu}{\sigma/\sqrt{n}}}{\sqrt{\frac{(n-1)S^2}{(n-1)\sigma^2}}} \,.$$
If, $F$ denotes the normal distribution, then $\bar{X} \sim N(\mu, \sigma^2/n)$, and thus $\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0,1)$. In addition, $\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}$ by Cochran's Theorem. Finally, by an application of Basu's theorem, $\bar{X}$ and $S^2$ are independent. This then implies that the resulting statistic has a $t$-distribution with $n-1$ degrees of freedom.
If the original data distribution $F$ was not normal, then, the exact distribution of the numerator and denominator will not be standard normal and $\chi^2$, respectively, and thus the resulting statistics will not have a $t$-distribution. | Why a T-statistic needs the data to follow a normal distribution | The information you require is in the "Characterization" section of the Wiki page. A $t$-distribution with degrees of freedom $\nu$ may be defined as the distribution of the random variable $T$ such | Why a T-statistic needs the data to follow a normal distribution
The information you require is in the "Characterization" section of the Wiki page. A $t$-distribution with degrees of freedom $\nu$ may be defined as the distribution of the random variable $T$ such that
$$T = \dfrac{Z}{\sqrt{V/\nu}} \,,$$
where $Z$ is a standard normal distribution random variable and $V$ is a $\chi^2$ random variable with degrees of freedom $\nu$. In addition, $Z$ and $V$ must be independent. So given any $Z$ and $V$ that follow the above definition, you can then arrive at a random variable that has a $t$-distribution.
Now, suppose $X_1, X_2, \dots, X_n$ is distributed according to a distribution $F$. Let $F$ have mean $\mu$ and variance $\sigma^2$. Let $\bar{X}$ be the sample mean and $S^2$ be the sample variance. We will then look at the formulae:
$$\dfrac{\bar{X} - \mu}{S/\sqrt{n}} = \dfrac{\frac{\bar{X} - \mu}{\sigma/\sqrt{n}}}{\sqrt{\frac{(n-1)S^2}{(n-1)\sigma^2}}} \,.$$
If, $F$ denotes the normal distribution, then $\bar{X} \sim N(\mu, \sigma^2/n)$, and thus $\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0,1)$. In addition, $\frac{(n-1)S^2}{\sigma^2} \sim \chi^2_{n-1}$ by Cochran's Theorem. Finally, by an application of Basu's theorem, $\bar{X}$ and $S^2$ are independent. This then implies that the resulting statistic has a $t$-distribution with $n-1$ degrees of freedom.
If the original data distribution $F$ was not normal, then, the exact distribution of the numerator and denominator will not be standard normal and $\chi^2$, respectively, and thus the resulting statistics will not have a $t$-distribution. | Why a T-statistic needs the data to follow a normal distribution
The information you require is in the "Characterization" section of the Wiki page. A $t$-distribution with degrees of freedom $\nu$ may be defined as the distribution of the random variable $T$ such |
19,769 | Why a T-statistic needs the data to follow a normal distribution | Just to add to the earlier responses something I think is relevant to the question, albeit possibly only indirectly: The normality of the data as pointed out in the answers is both necessary and sufficient for the t-statistic to have a t-distribution (hence, a characterization of it as a t-distributed random variable) because the normality of the data also characterizes the independence of the sample mean and sample variance (see, e.g., Lucaks (1942). A characterization of the normal distribution. Annals of Mathematical Statistics, 13(1), 91-93), which is crucial to the t-statistic having a t-distribution. An investigation of the necessity and sufficiency of the normality of the data for the t-distribution in this case is provided in Chen and Adatia (1997), "Independence and t distribution," The American Statistician, 51(2), 176-177. | Why a T-statistic needs the data to follow a normal distribution | Just to add to the earlier responses something I think is relevant to the question, albeit possibly only indirectly: The normality of the data as pointed out in the answers is both necessary and suffi | Why a T-statistic needs the data to follow a normal distribution
Just to add to the earlier responses something I think is relevant to the question, albeit possibly only indirectly: The normality of the data as pointed out in the answers is both necessary and sufficient for the t-statistic to have a t-distribution (hence, a characterization of it as a t-distributed random variable) because the normality of the data also characterizes the independence of the sample mean and sample variance (see, e.g., Lucaks (1942). A characterization of the normal distribution. Annals of Mathematical Statistics, 13(1), 91-93), which is crucial to the t-statistic having a t-distribution. An investigation of the necessity and sufficiency of the normality of the data for the t-distribution in this case is provided in Chen and Adatia (1997), "Independence and t distribution," The American Statistician, 51(2), 176-177. | Why a T-statistic needs the data to follow a normal distribution
Just to add to the earlier responses something I think is relevant to the question, albeit possibly only indirectly: The normality of the data as pointed out in the answers is both necessary and suffi |
19,770 | Why a T-statistic needs the data to follow a normal distribution | I think there may be some confusion between the statistic and its formula, versus the distribution and its formula. You can apply the t-statistic formula to any dataset and get a "t-statistic", but this statistic will not be distributed according to the student-t distribution unless the data came from a normal distribution (or at least, will not be guaranteed to be; my guess is that non-normal distributions won't produce a student-t distribution when the t-statistic formula is applied, but I'm not certain of that). The reason for this is simply that the distribution of the t-statistic is calculated from the distribution of the data that generated it, so if you have a different underlying distribution, then you're not guaranteed to have the same distribution for derived statistics. | Why a T-statistic needs the data to follow a normal distribution | I think there may be some confusion between the statistic and its formula, versus the distribution and its formula. You can apply the t-statistic formula to any dataset and get a "t-statistic", but th | Why a T-statistic needs the data to follow a normal distribution
I think there may be some confusion between the statistic and its formula, versus the distribution and its formula. You can apply the t-statistic formula to any dataset and get a "t-statistic", but this statistic will not be distributed according to the student-t distribution unless the data came from a normal distribution (or at least, will not be guaranteed to be; my guess is that non-normal distributions won't produce a student-t distribution when the t-statistic formula is applied, but I'm not certain of that). The reason for this is simply that the distribution of the t-statistic is calculated from the distribution of the data that generated it, so if you have a different underlying distribution, then you're not guaranteed to have the same distribution for derived statistics. | Why a T-statistic needs the data to follow a normal distribution
I think there may be some confusion between the statistic and its formula, versus the distribution and its formula. You can apply the t-statistic formula to any dataset and get a "t-statistic", but th |
19,771 | Why a T-statistic needs the data to follow a normal distribution | All that is needed is that $\bar{X}$ is normally distributed. If $\bar{X}$ is exactly normally distributed (not approximately normal) then the $X_i$ are normally distributed, $(n-1)S^2/\sigma^2$ is chi-square distributed and independent of $\bar{X}$, and $\frac{\sqrt{n}(\bar{X}-\mu)}{S}\sim T_{n-1}$. If $\bar{X}$ is only normally distributed asymptotically there is no guarantee that $\bar{X}$ and $S$ are independent nor that $(n-1)S^2/\sigma^2$ is chi-square distributed, but $\frac{\sqrt{n}(\bar{X}-\mu)}{S}\overset{asymp}{\sim}N(0,1)$ and of course a $T_{n-1}$ distribution and a $N(0,1)$ distribution are indistinguishable asymptotically.
Below is a histogram of $X_1,...,X_{100}\sim Gamma(2,3)$ with mean $\mu=2\times 3=6$, and below that is the sampling distribution of $\bar{X}$.
Of course the sample standard deviation is not independent of the sample mean as evidenced by the scatter plot below.
Nevertheless, the sampling distribution of $\sqrt{n}(\bar{X}-\mu)/S$ is well approximated by a $T_{n-1}$ distribution, i.e. $\sqrt{n}(\bar{X}-\mu)/S\overset{asymp}{\sim} T_{n-1}$.
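A minimal simulation sketch along these lines (my own code with an arbitrary seed; Gamma with shape 2 and scale 3, so $\mu = 6$, and $n = 100$):
set.seed(1)
n <- 100; mu <- 6
tstat <- replicate(10000, {
  x <- rgamma(n, shape = 2, scale = 3)
  sqrt(n) * (mean(x) - mu) / sd(x)
})
# compare simulated quantiles with the t_{n-1} reference distribution
qqplot(qt(ppoints(10000), df = n - 1), tstat,
       xlab = "t(n-1) quantiles", ylab = "simulated statistics")
abline(0, 1)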
For the distribution of $\sqrt{n}(\bar{X}-\mu)/S$ to be exactly $T_{n-1}$ distributed for any sample size then $X_i$ must come from a normal distribution. | Why a T-statistic needs the data to follow a normal distribution | All that is needed is that $\bar{X}$ is normally distributed. If $\bar{X}$ is exactly normally distributed (not approximately normal) then the $X_i$ are normally distributed, $(n-1)S^2/\sigma^2$ is ch | Why a T-statistic needs the data to follow a normal distribution
All that is needed is that $\bar{X}$ is normally distributed. If $\bar{X}$ is exactly normally distributed (not approximately normal) then the $X_i$ are normally distributed, $(n-1)S^2/\sigma^2$ is chi-square distributed and independent of $\bar{X}$, and $\frac{\sqrt{n}(\bar{X}-\mu)}{S}\sim T_{n-1}$. If $\bar{X}$ is only normally distributed asymptotically there is no guarantee that $\bar{X}$ and $S$ are independent nor that $(n-1)S^2/\sigma^2$ is chi-square distributed, but $\frac{\sqrt{n}(\bar{X}-\mu)}{S}\overset{asymp}{\sim}N(0,1)$ and of course a $T_{n-1}$ distribution and a $N(0,1)$ distribution are indistinguishable asymptotically.
Below is a histogram of $X_1,...,X_{100}\sim Gamma(2,3)$ with mean $\mu=2\times 3=6$, and below that is the sampling distribution of $\bar{X}$.
Of course the sample standard deviation is not independent of the sample mean as evidenced by the scatter plot below.
Nevertheless, the sampling distribution of $\sqrt{n}(\bar{X}-\mu)/S$ is well approximated by a $T_{n-1}$ distribution, i.e. $\sqrt{n}(\bar{X}-\mu)/S\overset{asymp}{\sim} T_{n-1}$.
For the distribution of $\sqrt{n}(\bar{X}-\mu)/S$ to be exactly $T_{n-1}$ distributed for any sample size then $X_i$ must come from a normal distribution. | Why a T-statistic needs the data to follow a normal distribution
All that is needed is that $\bar{X}$ is normally distributed. If $\bar{X}$ is exactly normally distributed (not approximately normal) then the $X_i$ are normally distributed, $(n-1)S^2/\sigma^2$ is ch |
19,772 | Why is it valid to detrend time series with regression? | You're astute in sensing that there may be conflict between classical assumptions of ordinary least squares linear regression and the serial dependence commonly found in the time series setting.
Consider Assumption 1.2 (Strict Exogeneity) of Fumio Hayashi's Econometrics.
$$ \mathrm{E}[\epsilon_i \mid X] = 0 $$
This in turn implies $\mathrm{E}[\epsilon_i \mathbf{x}_j] = \mathbf{0}$, that any residual $\epsilon_i$ is orthogonal to any regressor $\mathbf{x}_j$.
As Hayashi points out, this assumption is violated in the simplest autoregressive model.[1] Consider the AR(1) process:
$$y_{t} = \beta y_{t-1} + \epsilon_t$$
We can see that $y_t$ will be a regressor for $y_{t+1}$, but $\epsilon_t$ isn't orthogonal to $y_t$ (i.e. $\mathrm{E}[\epsilon_ty_t]\neq0$).
Since the strict exogeneity assumption is violated, none of the arguments that rely on that assumption can be applied to this simple AR(1) model!
So we have an intractable problem?
No, we don't! Estimating AR(1) models with ordinary least squares is entirely valid, standard behavior. Why can it still be ok?
Large sample, asymptotic arguments don't need strict exogeneity. A sufficient assumption (that can be used instead of strict exogeneity) is that the regressors are predetermined, i.e. that the regressors are orthogonal to the contemporaneous error term. See Hayashi Chapter 2 for a full argument.
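A small illustration of this point (my own sketch, not from Hayashi): OLS applied to an AR(1) is biased in short samples but consistent as the sample length grows, despite the failure of strict exogeneity.
set.seed(1)
ols_ar1 <- function(len, beta = 0.8) {
  y <- as.numeric(arima.sim(list(ar = beta), n = len))
  coef(lm(y[-1] ~ 0 + y[-len]))            # regress y_t on y_{t-1}, no intercept
}
mean(replicate(2000, ols_ar1(25)))         # noticeably below 0.8 (small-sample bias)
mean(replicate(2000, ols_ar1(1000)))       # close to 0.8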
References
[1] Fumio Hayashi, Econometrics (2000), p. 35
[2] ibid., p. 134 | Why is it valid to detrend time series with regression? | You're astute in sensing that there may be conflict between classical assumptions of ordinary least squares linear regression and the serial dependence commonly found in the time series setting.
Consi | Why is it valid to detrend time series with regression?
You're astute in sensing that there may be conflict between classical assumptions of ordinary least squares linear regression and the serial dependence commonly found in the time series setting.
Consider Assumption 1.2 (Strict Exogeneity) of Fumio Hayashi's Econometrics.
$$ \mathrm{E}[\epsilon_i \mid X] = 0 $$
This in turn implies $\mathrm{E}[\epsilon_i \mathbf{x}_j] = \mathbf{0}$, that any residual $\epsilon_i$ is orthogonal to any regressor $\mathbf{x}_j$.
As Hayashi points out, this assumption is violated in the simplest autoregressive model.[1] Consider the AR(1) process:
$$y_{t} = \beta y_{t-1} + \epsilon_t$$
We can see that $y_t$ will be a regressor for $y_{t+1}$, but $\epsilon_t$ isn't orthogonal to $y_t$ (i.e. $\mathrm{E}[\epsilon_ty_t]\neq0$).
Since the strict exogeneity assumption is violated, none of the arguments that rely on that assumption can be applied to this simple AR(1) model!
So we have an intractable problem?
No, we don't! Estimating AR(1) models with ordinary least squares is entirely valid, standard behavior. Why can it still be ok?
Large sample, asymptotic arguments don't need strict exogeniety. A sufficient assumption (that can be used instead of strict exogeneity) is that the regressors are predetermined, that regressors are orthogonal to the contemporaneous error term. See Hayashi Chapter 2 for a full argument.
References
[1] Fumio Hayashi, Econometrics (2000), p. 35
[2] ibid., p. 134 | Why is it valid to detrend time series with regression?
You're astute in sensing that there may be conflict between classical assumptions of ordinary least squares linear regression and the serial dependence commonly found in the time series setting.
Consi |
19,773 | Why is it valid to detrend time series with regression? | Basic least-squares type regression methods don't assume that the y-values are i.i.d. They assume that the residuals (i.e. y-value minus true trend) are i.i.d.
Other methods of regression exist which make different assumptions, but that'd probably be over-complicating this answer. | Why is it valid to detrend time series with regression? | Basic least-squares type regression methods don't assume that the y-values are i.i.d. They assume that the residuals (i.e. y-value minus true trend) are i.i.d.
Other methods of regression exist which | Why is it valid to detrend time series with regression?
Basic least-squares type regression methods don't assume that the y-values are i.i.d. They assume that the residuals (i.e. y-value minus true trend) are i.i.d.
Other methods of regression exist which make different assumptions, but that'd probably be over-complicating this answer. | Why is it valid to detrend time series with regression?
Basic least-squares type regression methods don't assume that the y-values are i.i.d. They assume that the residuals (i.e. y-value minus true trend) are i.i.d.
Other methods of regression exist which |
19,774 | Why is it valid to detrend time series with regression? | It's a good question! The issue is not even mentioned on my time series books (I probably need better books :) First of all, note that you're not forced to use linear regression to detrend a time series, if the series has a stochastic trend (unit root) - you could simply take the first difference. But you do have to use linear regression, if the series has a deterministic trend. In this case it's true that the residuals are not iid , as you say. Just think of a series which has a linear trend, seasonal components, cyclic components, etc. all together - after linear regression the residuals are all but independent. The point is that you're not then using linear regression to make predictions or to form prediction intervals. It's just a part of your procedure for inference: you still need to apply other methods to arrive at uncorrelated residuals. So, while linear regression per se is not a valid inference procedure (it is not the correct statistical model) for most time series, a procedure which includes linear regression as one of its steps may be a valid model, if the model it assumes corresponds to the data generating process for the time series. | Why is it valid to detrend time series with regression? | It's a good question! The issue is not even mentioned on my time series books (I probably need better books :) First of all, note that you're not forced to use linear regression to detrend a time ser | Why is it valid to detrend time series with regression?
It's a good question! The issue is not even mentioned on my time series books (I probably need better books :) First of all, note that you're not forced to use linear regression to detrend a time series, if the series has a stochastic trend (unit root) - you could simply take the first difference. But you do have to use linear regression, if the series has a deterministic trend. In this case it's true that the residuals are not iid , as you say. Just think of a series which has a linear trend, seasonal components, cyclic components, etc. all together - after linear regression the residuals are all but independent. The point is that you're not then using linear regression to make predictions or to form prediction intervals. It's just a part of your procedure for inference: you still need to apply other methods to arrive at uncorrelated residuals. So, while linear regression per se is not a valid inference procedure (it is not the correct statistical model) for most time series, a procedure which includes linear regression as one of its steps may be a valid model, if the model it assumes corresponds to the data generating process for the time series. | Why is it valid to detrend time series with regression?
It's a good question! The issue is not even mentioned on my time series books (I probably need better books :) First of all, note that you're not forced to use linear regression to detrend a time ser |
19,775 | Any example of (roughly) independent variables that are dependent at extreme values? | Here's an example where $X$ and $Y$ even have normal marginals.
Let:
$$X \sim N(0,1)$$
Conditional on $X$, let $Y = X$ if $|X| > \phi$, or $Y = -X$ otherwise, for some constant $\phi$.
You can show that, independently of $\phi$, marginally we have:
$$Y \sim N(0,1)$$
There is a value of $\phi$ such that $\text{cor}(X,Y) = 0$. If $\phi = 1.54$ then $\text{cor}(X,Y)\approx 0$.
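The value $\phi \approx 1.54$ can be found numerically (my own sketch, not part of the original answer): since $\text{cov}(X,Y) = \mathrm{E}[X^2 \, 1\{|X|>\phi\}] - \mathrm{E}[X^2 \, 1\{|X|\le\phi\}]$, it is zero when $\int_\phi^\infty x^2 \varphi(x)\,dx = 1/4$, i.e. when $\phi\,\varphi(\phi) + 1 - \Phi(\phi) = 1/4$:
g <- function(phi) phi * dnorm(phi) + 1 - pnorm(phi) - 0.25
uniroot(g, c(1, 2))$root   # approximately 1.538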
However, $X$ and $Y$ are not independent, and extreme values of both are perfectly dependent. See simulation in R below, and the plot that follows.
Nsim <- 10000
set.seed(123)
x <- rnorm(Nsim)
y <- ifelse(abs(x)>1.54,x,-x)
print(cor(x,y)) # 0.00284 \approx 0
plot(x,y)
extreme.x <- which(abs(x)>qnorm(0.95))
extreme.y <- which(abs(y)>qnorm(0.95))
extreme.both <- intersect(extreme.x,extreme.y)
print(cor(x[extreme.both],y[extreme.both])) # Exactly 1 | Any example of (roughly) independent variables that are dependent at extreme values? | Here's an example where $X$ and $Y$ even have normal marginals.
Let:
$$X \sim N(0,1)$$
Conditional on $X$, let $Y = X$ if $|X| > \phi$, or $Y = -X$ otherwise, for some constant $\phi$.
You can show | Any example of (roughly) independent variables that are dependent at extreme values?
Here's an example where $X$ and $Y$ even have normal marginals.
Let:
$$X \sim N(0,1)$$
Conditional on $X$, let $Y = X$ if $|X| > \phi$, or $Y = -X$ otherwise, for some constant $\phi$.
You can show that, independently of $\phi$, marginally we have:
$$Y \sim N(0,1)$$
There is a value of $\phi$ such that $\text{cor}(X,Y) = 0$. If $\phi = 1.54$ then $\text{cor}(X,Y)\approx 0$.
However, $X$ and $Y$ are not independent, and extreme values of both are perfectly dependent. See simulation in R below, and the plot that follows.
Nsim <- 10000
set.seed(123)
x <- rnorm(Nsim)
y <- ifelse(abs(x)>1.54,x,-x)
print(cor(x,y)) # 0.00284 \approx 0
plot(x,y)
extreme.x <- which(abs(x)>qnorm(0.95))
extreme.y <- which(abs(y)>qnorm(0.95))
extreme.both <- intersect(extreme.x,extreme.y)
print(cor(x[extreme.both],y[extreme.both])) # Exactly 1 | Any example of (roughly) independent variables that are dependent at extreme values?
Here's an example where $X$ and $Y$ even have normal marginals.
Let:
$$X \sim N(0,1)$$
Conditional on $X$, let $Y = X$ if $|X| > \phi$, or $Y = -X$ otherwise, for some constant $\phi$.
You can show |
19,776 | Scikit-learn Normalization mode (L1 vs L2 & Max) | The options lead to different normalizations. If $x$ is the vector of covariates of length $n$ and the normalized vector is $y = x / z$, then the three options denote what to use for $z$:
L1: $z = \| x\|_1 = \sum_{i=1}^n |x_i|$
L2: $z = \| x\|_2 = \sqrt{\sum_{i=1}^n x_i^2}$
Max: $z = \|x \|_\infty = \max |x_i|$
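For instance, for $x = (1, -2, 2)$ these give $z = 5$, $z = 3$ and $z = 2$ respectively, so the same vector is divided by a different constant depending on the chosen norm.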
Edit: previously, the Max option did not take absolute values first, so it was not equal to the $l_\infty$ norm -- however, that seems to have been updated and now it is equal to the infinity norm
(source code) | Scikit-learn Normalization mode (L1 vs L2 & Max) | The options lead to different normalizations. if $x$ is the vector of covariates of length $n$, and say that the normalized vector is $y = x / z$ then the three options denote what to use for $z$:
L1 | Scikit-learn Normalization mode (L1 vs L2 & Max)
The options lead to different normalizations. if $x$ is the vector of covariates of length $n$, and say that the normalized vector is $y = x / z$ then the three options denote what to use for $z$:
L1: $z = \| x\|_1 = \sum_{i=1}^n |x_i|$
L2: $z = \| x\|_2 = \sqrt{\sum_{i=1}^n x_i^2}$
Max: $z = \|x \|_\infty = \max |x_i|$
Edit: previously, using Max does not take absolute values first, so it is not equal to the $l_\infty$ norm -- however, that seems to have been updated and now it is equal to the infinity norm
(source code) | Scikit-learn Normalization mode (L1 vs L2 & Max)
The options lead to different normalizations. if $x$ is the vector of covariates of length $n$, and say that the normalized vector is $y = x / z$ then the three options denote what to use for $z$:
L1 |
19,777 | Factor rotation methods (varimax, quartimax, oblimin, etc.) - what do the names mean and what do the methods do? | This answer succeeds this general question on rotations in factor analysis (please read it) and briefly describes a number of specific methods.
Rotations are performed iteratively and on every pair of factors (columns of the loading matrix). This is needed because the task to optimize (maximize or minimize) the objective criterion simultaneously for all the factors would be mathematically difficult. However, in the end the final rotation matrix $\bf Q$ is assembled so that you can reproduce the rotation yourself with it, multiplying the extracted loadings $\bf A$ by it, $\bf AQ=S$, getting the rotated factor structure matrix $\bf S$. The objective criterion is some property of the elements (loadings) of resultant matrix $\bf S$.
Quartimax orthogonal rotation seeks to maximize the sum of all loadings raised to power 4 in $\bf S$. Hence its name ("quarti", four). It was shown that reaching this mathematical objective corresponds enough to satisfying the 3rd Thurstone's criterion of "simple structure" which sounds as: for every pair of factors there is several (ideally >= m) variables with loadings near zero for any one of the two and far from zero for the other factor. In other words, there will be many large and many small loadings; and points on the loading plot drawn for a pair of rotated factors would, ideally, lie close to one of the two axes. Quartimax thus minimizes the number of factors needed to explain a variable: it "simplifies" the rows of the loading matrix. But quartimax often produces the so called "general factor" (which most of the time is not desirable in FA of variables; it is more desirable, I believe, in the so called Q-mode FA of respondents).
Varimax orthogonal rotation tries to maximize variance of the squared loadings in each factor in $\bf S$. Hence its name (variance). As the result, each factor has only few variables with large loadings by the factor. Varimax directly "simplifies" columns of the loading matrix and by that it greatly facilitates the interpretability of factors. On the loading plot, points are spread wide along a factor axis and tend to polarize themselves into near-zero and far-from-zero. This property seems to satisfy a mixture of Thurstones's simple structure points to an extent. Varimax, however, is not safe from producing points lying far away from the axes, i.e. "complex" variables loaded high by more than one factor. Whether this is bad or ok depends of the field of the study. Varimax performs well mostly in combination with the so called Kaiser's normalization (equalizing communalities temporarily while rotating), it is advised to always use it with varimax (and recommended to use it with any other method, too). It is the most popular orthogonal rotation method, especially in psychometry and social sciences.
Equamax (rarely, Equimax) orthogonal rotation can be seen as a method sharpening some properties of varimax. It was invented in attempts to further improve it. Equalization refers to a special weighting which Saunders (1962) introduced into a working formula of the algorithm. Equamax self-adjusts for the number of the being rotated factors. It tends to distribute variables (highly loaded) more uniformly between factors than varimax does and thus further is less prone to giving "general" factors. On the other hand, equamax wasn't conceived to give up the quartimax's aim to simplify rows; equamax is rather a combination of varimax and quartimax than their in-between. However, equamax is claimed to be considerably less "reliable" or "stable" than varimax or quartimax: for some data it can give disastrously bad solutions while for other data it gives perfectly interpretable factors with simple structure. One more method, similar to equamax and even more ventured in quest of simple structure is called parsimax ("maximizing parsimony") (See Mulaik, 2010, for discussion).
I am sorry for stopping now and not reviewing the oblique methods - oblimin ("oblique" with "minimizing" a criterion) and promax (unrestricted procrustes rotation after varimax). The oblique methods would require probably longer paragraphs to describe them, but I didn't plan any long answer today. Both methods are mentioned in Footnote 5 of this answer. I may refer you to Mulaik, Foundations of factor analysis (2010); classic old Harman's book Modern factor analysis (1976); and whatever pops out in the internet when you search.
See also The difference between varimax and oblimin rotations in factor analysis; What does “varimax” mean in SPSS factor analysis?
Later addendum, with the history and the formulae, for the meticulous
Quartimax
In the 1950s, several factor analysis experts tried to embody Thurstone’s qualitative features of a “simple structure” (See footnote 1 here) into strict, quantitative criteria:
Ferguson reasoned that the most parsimonious disposition of points (loadings) in the space of factors (axes) will be when, for most pairs of factors, each of the two axes pierces its own clot of points, thus maximizing its own coordinates and minimizing the coordinates onto the perpendicular axis. So he suggested to minimize the products of loadings for each variable in pairs of factors (i,j), summed across all variables: $\sum^p\sum_{i,j;i<j}(a_i a_j)^2$ ($a$ is a loading, an element of a p variables x m factors loading matrix $\bf A$, in this case we mean, the final loadings - after a rotation).
Carroll also thought of pairs of factors (i,j) and wanted to minimize $\sum_{i,j;i<j}\sum^p(a_i^2 a_j^2)$. The idea was that for each pair of factors, the loadings should mostly be unequal-sized or both small, ideally a zero one against a nonzero or zero one.
Neuhaus and Wrigley wanted to maximize the variance of the squared values of loadings in the whole $\bf A$, in order the loadings to split themselves into big ones and near-zero ones.
Kaiser also chose variance, but variance of the squared loadings in rows of $\bf A$; and wanted to maximize the sum of these variances across the rows.
Saunders offered to maximize the kurtosis in the doubled distribution of the loadings (i.e., every loading from $\bf A$ is taken twice - with positive and with negative sign, since the sign of a loading is basically arbitrary). High kurtosis in this symmetric around zero distribution implies maximization of the share (contribution) of extreme (big) loadings as well of near-zero loadings, at the expense of the moderate-size loadings.
It then occurred (and it can be shown mathematically) that, in the milieu of orthogonal rotation, the optimization of all these five criteria is in fact equivalent from the “argmax” point of view, and they all can boil down to the maximization of
$Q= \sum^p\sum^m a^4$,
the overall sum of the 4-th power of loadings. The criterion therefore was called the quartimax. To repeat what was said in the beginning of the answer, quartimax minimizes the number of factors needed to explain a variable: it "simplifies" the rows of the loading matrix. But quartimax not rarely produces the so called "general factor".
Varimax
Having observed that quartimax simplifies well rows (variables) but is prone to “general factor”, Kaiser suggested to simplify $\bf A$’s columns (factors) instead. It was put above, that Kaiser’s idea for quartimax was to maximize the summed variance of squared loadings in rows of $\bf A$. Now he transposed the proposal and suggested to maximize the summed variance of squared loadings in columns of $\bf A$. That is, to maximize $\sum^m[\frac{1}{p} \sum^p (a^2)^2 - \frac{1}{p^2} (\sum^p a^2)^2]$ (the bracketed part is the formula of the variance of p squared values $a$), or, if multiplied by $p^2$, for convenience:
$V = \sum^m[p \sum^p (a^2)^2 - (\sum^p a^2)^2] = p \sum^m\sum^p (a^4) - \sum^m(\sum^p a^2)^2 = pQ - W$
where $V$ is the varimax criterion, $Q$ is the quartimax criterion, and $W$ is the sum of squared variances of the factors (after the rotation) [a factor's variance is the sum of its squared loadings].
[I’ve remarked that Kaiser obtained varimax by simply transposing the quartimax’s problem - to simplify columns in place of rows, - and you may switch places of m and p in the formula for $V$, to get the symmetric corresponding expression, $mQ – W^*$, for quartimax. Since we are rotating columns, not rows, of the loading matrix, the quartimax’s term $W^*$, the sum of squared communalities of the variables, does not change with rotation and therefore can be dropped from the objective statement; after which can also drop multiplier m - and stay with sole $Q$, what quartimax is. While in case of varimax, term $W$ changes with rotations and thus stays an important part of the formula, to be optimized along with it.]
Kaiser normalization. Kaiser was dissatisfied with the fact that variables with large communalities dictate the rotation by the $V$ criterion much more than variables with small communalities. So he introduced normalizing all communalities to unity before running the procedure that maximizes $V$ (and, of course, de-normalizing back after the rotation is performed - communalities don’t change in an orthogonal rotation). By tradition, Kaiser normalization is often recommended - mainly with varimax, but sometimes with quartimax and other rotation methods too, because, logically, it is not tied to varimax alone. Whether the trick is really beneficial is an unsettled issue. Some software does it by default, some does it by default only for varimax, and some doesn’t make it a default option at all. (At the end of this answer, I have a remark on the normalization.)
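In code the normalization is just a row scaling and back-scaling; a minimal R sketch (rotate_somehow is a placeholder, not a real function):
A   <- matrix(runif(36, -0.9, 0.9), 12, 3)   # a made-up 12 x 3 loading matrix
h   <- sqrt(rowSums(A^2))                    # square roots of the communalities
A_n <- sweep(A, 1, h, "/")                   # every row brought to unit communality
rowSums(A_n^2)                               # all equal to 1
# A_rot_n <- rotate_somehow(A_n)             # (hypothetical) rotate the normalized loadings
# A_rot   <- sweep(A_rot_n, 1, h, "*")       # de-normalize; communalities come back unchanged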
Such is varimax, which maximizes the variances of squared loadings in columns of $\bf A$ and therefore simplifies the factors - in exact opposition to quartimax, which does that in rows of $\bf A$, simplifying the variables. Kaiser demonstrated that, if the population factor structure is relatively sharp (i.e., variables tend to cluster around different factors), varimax is more robust (stable) than quartimax to the removal of some variables from the rotation.
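For completeness: base R’s stats package implements this criterion as varimax(), with Kaiser normalization switched on by default (normalize = TRUE). A minimal, hedged example on made-up unrotated loadings (in practice they would come from factanal() or another extraction):
A   <- matrix(runif(30, -0.9, 0.9), 10, 3)   # made-up unrotated loadings
vmx <- varimax(A, normalize = TRUE)          # Kaiser normalization (the default)
vmx$loadings                                 # varimax-rotated loadings
vmx$rotmat                                   # the accrued orthogonal rotation matrix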
Equamax and Parsimax
Saunders decided to play up the fact that quartimax and varimax are actually one formula, $pQ - cW$, where $c=0$ (and then p is traditionally dropped) for quartimax and $c=1$ for varimax. He experimented with factor analytic data in search of a greater value for the coefficient $c$, in order to accentuate the varimaxian, non-quartimaxian side of the criterion. He found that $c=m/2$ often produces factors that are more interpretable than after varimax or quartimax rotations. He called $pQ - \frac{m}{2}W$ equamax. The rationale for making $c$ depend on m was that, as the number of factors grows while p does not, the a priori expected proportion of variables loaded by any one factor diminishes; to compensate for this, we should raise $c$.
In a similar pursuit of further “bettering” the generic criterion, Crawford arrived at yet another coefficient value, $c = p(m−1)/(p+m−2)$, depending both on m and p. This version of the criterion was named parsimax.
It is possible to go further and set $c=p$, yielding the criterion facpars, “factor parsimony”, which, as far as I’m aware, is very seldom used.
(I think) it is still an open question whether equamax or parsimax are really better than varimax, and if yes, then in what situations. Their dependence on the parameters m (and p) makes them self-tuning (for advocates) or capricious (for critics). From a purely mathematical or general-data point of view, raising $c$ simply pushes the factors toward more equal final variances - it does not at all make the criterion “more varimax than varimax” or “balanced between varimax and quartimax” with respect to their objective goals, for both varimax and quartimax already optimize to the limit what they were meant to optimize.
The considered generic criterion of the form $pQ - cW$ (where Q is quartimax, $\sum^p\sum^m a^4$, and W is the sum of squared factor variances, $\sum^m(\sum^p a^2)^2$) is known as orthomax. Quartimax, varimax, equamax, parsimax, and facpars are its particular versions. In general, the coefficient $c$ can take on any value. When it is close to +infinity, it produces factors of completely equal variances (so use that if such is your aim). When it is close to -infinity, you get loadings equal to what you would get by rotating your loading matrix to its principal components by means of PCA (without centering the columns). So the value of $c$ is the parameter stretching the dimension “one great general factor vs. all factors of equal strength”.
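The whole family fits in one small function of the coefficient $c$ (a sketch of the criterion only, not of the rotation algorithm; A is assumed to be a p x m rotated loading matrix):
orthomax <- function(A, c) {
  p <- nrow(A)
  p * sum(A^4) - c * sum(colSums(A^2)^2)   # pQ - cW
}
# quartimax: c = 0 (gives pQ, equivalent to Q for maximization)
# varimax:   c = 1
# equamax:   c = m/2
# parsimax:  c = p*(m - 1)/(p + m - 2)
# facpars:   c = p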
In their important paper of 1970, Crawford & Ferguson extended the varying-$c$ criterion to the case of nonorthogonal factor rotations (calling the more general coefficient kappa).
Literature
Harman, H.H. Modern factor analysis. 1976.
Mulaik, S.A. Foundations of factor analysis. 2010.
Clarkson, D.B. Quartic rotation criteria and algorithms // Psychometrika, 1988, 53, 2, p. 251-259.
Crawford, C.B., Ferguson, G.A. A general rotation criterion and its use in orthogonal rotation // Psychometrika, 1970, 35, 3, p. 321-332.
Comparing main characteristics of the criteria
I generated p variables x m factors loading matrices as values from a uniform distribution (so yes, that was not a sharp, clean factor structure), 50 matrices for each combination of p and the m/p proportion, and rotated each loading matrix by quartimax (Q), varimax (V), equamax (E), parsimax (P), and facpars (F), all methods accompanied by Kaiser normalization. Quartimax (Q0) and varimax (V0) were also tried without Kaiser normalization. Comparisons between the criteria on three characteristics of the rotated matrix are displayed below (for each matrix generated, the 7 values of the post-rotational characteristic were rescaled into the 0-1 range; then means across the 50 simulations and 95% CIs are plotted).
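For the record, the skeleton of one simulation cell might look like this in R (a rough reconstruction of the described setup with made-up sizes; the original code is not shown here):
set.seed(123)
p <- 20; m <- 4; n.sim <- 50
row.crit <- replicate(n.sim, {
  A  <- matrix(runif(p*m), p, m)                        # random, non-sharp "loadings"
  Ar <- unclass(varimax(A, normalize = TRUE)$loadings)  # rotate (here: varimax with Kaiser normalization)
  sum(apply(Ar^2, 1, var))                              # Fig.1 characteristic: summed row variances of squared loadings
})
mean(row.crit)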
Fig.1. Comparing the sum of variances of squared loadings in rows (maximizing this is the quartimax’s prerogative):
Comment: The superiority of quartimax over the other criteria tends to grow as p increases or as m/p increases. Varimax is second best most of the time. Equamax and parsimax are quite similar.
Fig.2. Comparing the sum of variances of squared loadings in columns (maximizing this is the varimax’s prerogative):
Comment: The superiority of varimax over the other criteria tends to grow as p increases or as m/p increases. Quartimax’s tendency is the opposite: as the parameters increase it loses ground. In the bottom-right part, quartimax is the worst; that is, with large-scale factor analysis it fails to do the “varimaxian” job. Equamax and parsimax are quite similar.
Fig.3. Comparing inequality of factor variances (this is driven by coefficient $c$); the variance used as the measure of “inequality”:
Comment: Yes, with growing $c$, that is, along the line Q V E P F, the inequality of factor variances falls. Q is the leader in inequality, which speaks of its propensity for a “general factor”, and its gap with the other criteria widens as p grows or m/p grows.
Comparing inequality of factor variances (this is driven by coefficient $c$); proportion “sum of absolute loadings of the strongest factor / average of such sums across the rest m-1 factors” was used as the measure of “inequality”:
This is another, more direct test for the presence of a "general factor". The configuration of results was almost the same as in the previous picture (Fig.3), so I'm not showing a separate picture.
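The two "inequality" measures themselves are one-liners in R (my own illustration on a made-up rotated matrix Ar):
A  <- matrix(runif(20*4), 20, 4)
Ar <- unclass(varimax(A)$loadings)     # some rotated loadings
fv <- colSums(Ar^2)                    # factor variances after rotation
var(fv)                                # Fig.3 measure: variance of the factor variances
s  <- colSums(abs(Ar))
max(s) / mean(s[-which.max(s)])        # ratio measure: strongest factor vs the average of the rest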
Disclaimer. These trials, on which the above figures are based, were done on loading matrices with random, non-sharp factor structures, i.e. there were no specially preset clear clusters of variables or other specific structure among the loadings.
Kaiser normalization. From the above Fig.1-2 one can learn that the versions of quartimax and varimax without the normalization perform the two tasks (the maximizations) markedly better than when accompanied by the normalization. At the same time, the absence of the normalization makes them a little more prone to a "general factor" (Fig.3).
The question of whether (and when) Kaiser normalization should be used still seems open to me. Perhaps one should try both, with and without the normalization, and see where the applied factor interpretation is more satisfying. When we don’t know what to choose on mathematical grounds, it is time to resort to “philosophical” considerations which, as usual, are set in contrast. I can imagine two positions:
Contra normalization. A variable with small communality (high uniqueness) is not much help in any rotation. It contains only traces of the totality of the m factors, so it has no chance of getting a large loading on any of them. But we interpret factors mostly by large loadings, and the smaller the loading, the harder it is to discern the essence of the factor in the variable. It would even be justified to exclude a variable with small communality from the rotation. Kaiser normalization runs counter to this motive.
Pro normalization. The communality (non-uniqueness) of a variable is the amount of its inclination toward the space of the m factors from the outside (i.e., it is the magnitude of its projection into that space). Rotation of axes inside that space is unrelated to that inclination. The rotation - which decides which of the m factors will and which will not load the variable - concerns a variable with any size of communality equally, because the initial openness of that “internal” decision is equally sharp for all variables, whatever their “external” inclination. So, as long as we choose to speak of the variables and not of their projections inside the space, there is no reason to give them weights depending on their inclinations in the act of rotation. And to manage to discern the essence of a factor in a variable under any size of loading is a desideratum (and theoretically a must) for an interpreter of factors.
Orthogonal analytic rotations (Orthomax) algorithm pseudocode
Shorthand notation:
* matrix multiplication (or simple multiplication, for scalars)
&* elementwise (Hadamard) multiplication
^ exponentiation of elements
sqrt(M) square roots of elements in matrix M
rsum(M) row sums of elements in matrix M
csum(M) column sums of elements in matrix M
rssq(M) row sums of squares in matrix M, = rsum(M^2)
cssq(M) column sums of squares in matrix M, = csum(M^2)
msum(M) sum of elements in matrix M
make(nr,nc,val) create nr x nc matrix populated with value val
A is p x m loading matrix with m orthogonal factors, p variables
If Kaiser normalization is requested:
h = sqrt(rssq(A)). /*sqrt(communalities), column vector
A = A/(h*make(1,m,1)). /*Bring all communalities to unit
R is the orthogonal rotation matrix to accrue:
Initialize it as m x m identity matrix
Compute the initial value of the criterion Crit;
the coefficient c is: 0 for Quartimax, 1 for Varimax, m/2 for Equamax,
p(m-1)/(p+m-2) for Parsimax, p for Facpars; or you may choose arbitrary c:
Q = msum(A^4)
If “Quartimax”
Crit = Q
Else
W = rssq(cssq(A))
Crit = p*Q - c*W
Begin iterations
For each pair of factors (columns of A) i, j (i<j) do:
ai = A(:,i) /*Copy out the
aj = A(:,j) /*two factors
u = ai^2 - aj^2
v = 2 * ai &* aj
@d = 2 * csum(u &* v)
@c = csum(u^2 – v^2)
@a = csum(u)
@b = csum(v)
Compute the angle Phi of rotation of the two factors in their space
(coefficient c as defined above):
num = @d - c * 2*@a*@b/p
den = @c - c * (@a^2 - @b^2)/p
Phi4 = arctan(num/den) /*4Phi (in radians)
If den>0 /*4Phi is in the 1st or the 4th quadrant
Phi = Phi4/4
Else if num>0 /*4Phi is in the 2nd quadrant (pi is the pi value)
Phi = (pi + Phi4)/4
Else /*4Phi is in the 3rd quadrant
Phi = (Phi4 - pi)/4
Perform the rotation of the pair (rotate if Phi is not negligible):
@sin = sin(Phi)
@cos = cos(Phi)
r_ij = {@cos,-@sin;@sin,@cos} /*The 2 x 2 rotation matrix
A(:,{i,j}) = {ai,aj} * r_ij /*Rotate factors (columns) i and j in A
R(:,{i,j}) = R(:,{i,j}) * r_ij /*Update also the columns of the being accrued R
Go to consider next pair of factors i, j, again copying them out, etc.
When all pairs are through, compute the criterion:
Crit = … (see as defined above)
End iterations if Crit has effectively stopped growing (say, its increase is not greater than
0.0001 versus the previous iteration), or the stock of iterations (say, 50) exhausted.
If Kaiser normalization was requested:
A = A &* (h*make(1,m,1)) /*De-normalize
Ready. A has been rotated. A(input)*R = A(output)
Optional post-actions, for convenience:
1) Reorder factors by decreasing their variances (i.e., their cssq(A)).
2) Switch sign of the loadings so that positive loadings prevail in each factor.
Quartimax and Varimax are always positive values; others can be negative.
All the criteria grow on iterations. | Factor rotation methods (varimax, quartimax, oblimin, etc.) - what do the names mean and what do the | This answer succeeds this general question on rotations in factor analysis (please read it) and briefly describes a number of specific methods.
Rotations are performed iteratively and on every pair of | Factor rotation methods (varimax, quartimax, oblimin, etc.) - what do the names mean and what do the methods do?
This answer succeeds this general question on rotations in factor analysis (please read it) and briefly describes a number of specific methods.
Rotations are performed iteratively and on every pair of factors (columns of the loading matrix). This is needed because the task to optimize (maximize or minimize) the objective criterion simultaneously for all the factors would be mathematically difficult. However, in the end the final rotation matrix $\bf Q$ is assembled so that you can reproduce the rotation yourself with it, multiplying the extracted loadings $\bf A$ by it, $\bf AQ=S$, getting the rotated factor structure matrix $\bf S$. The objective criterion is some property of the elements (loadings) of resultant matrix $\bf S$.
Quartimax orthogonal rotation seeks to maximize the sum of all loadings raised to power 4 in $\bf S$. Hence its name ("quarti", four). It was shown that reaching this mathematical objective corresponds enough to satisfying the 3rd Thurstone's criterion of "simple structure" which sounds as: for every pair of factors there is several (ideally >= m) variables with loadings near zero for any one of the two and far from zero for the other factor. In other words, there will be many large and many small loadings; and points on the loading plot drawn for a pair of rotated factors would, ideally, lie close to one of the two axes. Quartimax thus minimizes the number of factors needed to explain a variable: it "simplifies" the rows of the loading matrix. But quartimax often produces the so called "general factor" (which most of the time is not desirable in FA of variables; it is more desirable, I believe, in the so called Q-mode FA of respondents).
Varimax orthogonal rotation tries to maximize variance of the squared loadings in each factor in $\bf S$. Hence its name (variance). As the result, each factor has only few variables with large loadings by the factor. Varimax directly "simplifies" columns of the loading matrix and by that it greatly facilitates the interpretability of factors. On the loading plot, points are spread wide along a factor axis and tend to polarize themselves into near-zero and far-from-zero. This property seems to satisfy a mixture of Thurstones's simple structure points to an extent. Varimax, however, is not safe from producing points lying far away from the axes, i.e. "complex" variables loaded high by more than one factor. Whether this is bad or ok depends of the field of the study. Varimax performs well mostly in combination with the so called Kaiser's normalization (equalizing communalities temporarily while rotating), it is advised to always use it with varimax (and recommended to use it with any other method, too). It is the most popular orthogonal rotation method, especially in psychometry and social sciences.
Equamax (rarely, Equimax) orthogonal rotation can be seen as a method sharpening some properties of varimax. It was invented in attempts to further improve it. Equalization refers to a special weighting which Saunders (1962) introduced into a working formula of the algorithm. Equamax self-adjusts for the number of the being rotated factors. It tends to distribute variables (highly loaded) more uniformly between factors than varimax does and thus further is less prone to giving "general" factors. On the other hand, equamax wasn't conceived to give up the quartimax's aim to simplify rows; equamax is rather a combination of varimax and quartimax than their in-between. However, equamax is claimed to be considerably less "reliable" or "stable" than varimax or quartimax: for some data it can give disastrously bad solutions while for other data it gives perfectly interpretable factors with simple structure. One more method, similar to equamax and even more ventured in quest of simple structure is called parsimax ("maximizing parsimony") (See Mulaik, 2010, for discussion).
I am sorry for stopping now and not reviewing the oblique methods - oblimin ("oblique" with "minimizing" a criterion) and promax (unrestricted procrustes rotation after varimax). The oblique methods would require probably longer paragraphs to describe them, but I didn't plan any long answer today. Both methods are mentioned in Footnote 5 of this answer. I may refer you to Mulaik, Foundations of factor analysis (2010); classic old Harman's book Modern factor analysis (1976); and whatever pops out in the internet when you search.
See also The difference between varimax and oblimin rotations in factor analysis; What does “varimax” mean in SPSS factor analysis?
Later addendum, with the history and the formulae, for meticulous
Quartimax
In the 1950s, several factor analysis experts tried to embody Thurstone’s qualitative features of a “simple structure” (See footnote 1 here) into strict, quantitative criteria:
Ferguson reasoned that the most parsimonious disposition of points (loadings) in the space of factors (axes) will be when, for most pairs of factors, each of the two axes pierces its own clot of points, thus maximizing its own coordinates and minimizing the coordinates onto the perpendicular axis. So he suggested to minimize the products of loadings for each variable in pairs of factors (i,j), summed across all variables: $\sum^p\sum_{i,j;i<j}(a_i a_j)^2$ ($a$ is a loading, an element of a p variables x m factors loading matrix $\bf A$, in this case we mean, the final loadings - after a rotation).
Carroll also thought of pairs of factors (i,j) and wanted to minimize $\sum_{i,j;i<j}\sum^p(a_i^2 a_j^2)$. The idea was that for each pair of factors, the loadings should mostly be unequal-sized or both small, ideally a zero one against a nonzero or zero one.
Neuhaus and Wrigley wanted to maximize the variance of the squared values of loadings in the whole $\bf A$, in order the loadings to split themselves into big ones and near-zero ones.
Kaiser also chose variance, but variance of the squared loadings in rows of $\bf A$; and wanted to maximize the sum of these variances across the rows.
Saunders offered to maximize the kurtosis in the doubled distribution of the loadings (i.e., every loading from $\bf A$ is taken twice - with positive and with negative sign, since the sign of a loading is basically arbitrary). High kurtosis in this symmetric around zero distribution implies maximization of the share (contribution) of extreme (big) loadings as well of near-zero loadings, at the expense of the moderate-size loadings.
It then occurred (and it can be shown mathematically) that, in the milieu of orthogonal rotation, the optimization of all these five criteria is in fact equivalent from the “argmax” point of view, and they all can boil down to the maximization of
$Q= \sum^p\sum^m a^4$,
the overall sum of the 4-th power of loadings. The criterion therefore was called the quartimax. To repeat what was said in the beginning of the answer, quartimax minimizes the number of factors needed to explain a variable: it "simplifies" the rows of the loading matrix. But quartimax not rarely produces the so called "general factor".
Varimax
Having observed that quartimax simplifies well rows (variables) but is prone to “general factor”, Kaiser suggested to simplify $\bf A$’s columns (factors) instead. It was put above, that Kaiser’s idea for quartimax was to maximize the summed variance of squared loadings in rows of $\bf A$. Now he transposed the proposal and suggested to maximize the summed variance of squared loadings in columns of $\bf A$. That is, to maximize $\sum^m[\frac{1}{p} \sum^p (a^2)^2 - \frac{1}{p^2} (\sum^p a^2)^2]$ (the bracketed part is the formula of the variance of p squared values $a$), or, if multiplied by $p^2$, for convenience:
$V = \sum^m[p \sum^p (a^2)^2 - (\sum^p a^2)^2] = p \sum^m\sum^p (a^4) - \sum^m(\sum^p a^2)^2 = pQ - W$
where $V$ is the varimax criterion, $Q$ is the quartimax criterion, and $W$ is the sum of squared variances of the factors (after the rotation) [a factor's variance is the sum of its squared loadings].
[I’ve remarked that Kaiser obtained varimax by simply transposing the quartimax’s problem - to simplify columns in place of rows, - and you may switch places of m and p in the formula for $V$, to get the symmetric corresponding expression, $mQ – W^*$, for quartimax. Since we are rotating columns, not rows, of the loading matrix, the quartimax’s term $W^*$, the sum of squared communalities of the variables, does not change with rotation and therefore can be dropped from the objective statement; after which can also drop multiplier m - and stay with sole $Q$, what quartimax is. While in case of varimax, term $W$ changes with rotations and thus stays an important part of the formula, to be optimized along with it.]
Kaiser normalization. Kaiser felt dissatisfied with that variables with large communalities dictate the rotation by $V$ criterion much more than variables with small communalities. So he introduced normalizing all communalities to unit before launching the procedure maximizing $V$ (and, of course, de-normalizing back after the performed rotation - communalities don’t change in an orthogonal rotation). Per tradition, Kaiser normalization is often recommended to do – mainly with varimax, but sometimes along with quartimax and other rotation methods too, because, logically, it is not tied with varimax solely. Whether the trick is really beneficial, is an unsettled issue. Some software do it by default, some – by default only for varimax, still some – don’t set it to be a default option. (In the end of this answer, I have a remark on the normalization.)
So was varimax, who maximizes variances of squared loadings in columns of $\bf A$ and therefore simplifies the factors – in exact opposition to quartimax, who did that in rows of $\bf A$, simplifying the variables. Kaiser demonstrated that, if the population factor structure is relatively sharp (i.e., variables tend to cluster together around different factors), varimax is more robust (stable) than quartimax to removal of some variables from the rotation operation.
Equamax and Parsimax
Saunders decided to play up the fact that quartimax and varimax are actually one formula, $pQ - cW$, where $c=0$ (and then p traditionally is dropped) for quartimax and $c=1$ for varimax. He experimented with factor analytic data in the search of a greater value for coefficient $c$ in order to accentuate the varimaxian, non-quartimaxian side of the criterion. He found that $c=m/2$ often produces factors that are more interpretable than after varimax or quartimax rotations. He called $pQ – \frac{m}{2}W$ equamax. The rationale to make $c$ dependent on m was that as the number of factors grows while p does not, the a priori expected proportion of variables to be loaded by any one factor diminishes; and to compensate it, we should raise $c$.
In a similar pursuit of further “bettering” the generic criterion, Crawford arrived at yet another coefficient value, $c = p(m−1)/(p+m−2)$, depending both on m and p. This version of the criterion was named parsimax.
It is possible further to set $c=p$, yielding criterion facpars, “factor parsimony”, which, as I’m aware, is very seldom used.
(I think) It is still an open question if equamax or parsimax are really better than varimax, and if yes, then in what situations. Their dependence on the parameters m (and p) makes them self-tuning (for advocates) or capricious (for critics). Well, from purely math or general data p.o.v., raising $c$ means simply pushing factors in the direction of more equal final variances, - and not at all making the criterion “more varimax than varimax” or “balanced between varimax and quartimax” w.r.t. their objective goals, for both varimax and quartimax optimize well to the limit what they were meant to optimize.
The considered generic criterion of the form $pQ - cW$ (where Q is quartimax, $\sum^p\sum^m a^4$, and W is the sum of squared factor variances, $\sum^m(\sum^p a^2)^2$, is known as orthomax. Quartimax, varimax, equamax, parsimax, and facpars are its particular versions. In general, coefficient $c$ can take on any value. When close to +infinity, it produces factors of completely equal variances (so use that, if your aim is such). When close to -infinity, you get loadings equal to what you get if you rotate your loading matrix into its principal components by means of PCA (without centering the columns). So, value of $c$ is the parameter stretching the dimension “great general factor vs all factors equal strength”.
In their important paper of 1970, Crawford & Ferguson extend the varying $c$ criterion over to the case of nonorthogonal factor rotations (calling that more general coefficient kappa).
Literature
Harman, H.H. Modern factor analysis. 1976.
Mulaik, S.A. Foundations of factor analysis. 2010.
Clarkson, D.B. Quartic rotation criteria and algorithms // Psychometrica, 1988, 53, 2, p. 251-259.
Crawford, C.B., Ferguson, G.A. A general rotation criterion and its use in orthogonal rotation // Psychometrica, 1970, 35, 3, p. 321-332.
Comparing main characteristics of the criteria
I’ve been generating p variables x m factors loading matrices as values from uniform distribution (so yes, that was not a sharp, clean factor structure), 50 matrices for each combination of p and m/p proportion, and rotating each loading matrix by quartimax (Q), varimax (V), equamax (E), parsimax (P), and facpars (F), all methods accompanied by Kaiser normalization. Quartimax (Q0) and varimax (V0) were also tried without Kaiser normalization. Comparisons between the criteria on three characteristics of the rotated matrix are displayed below (for each matrix generated, the 7 values of the post-rotational characteristic were rescaled into the 0-1 range; then means across the 50 simulations and 95% CI are plotted).
Fig.1. Comparing the sum of variances of squared loadings in rows (maximizing this is the quartimax’s prerogative):
Comment: Superiority of quartimax over the other criteria tend to grow as p increases or as m/p increases. Varimax most of the time is second best. Equamax and parsimax are quite similar.
Fig.2. Comparing the sum of variances of squared loadings in columns (maximizing this is the varimax’s prerogative):
Comment: Superiority of varimax over the other criteria tend to grow as p increases or as m/p increases. Quartimax’s tendency is opposite: as the parameters increase it loses ground. In the bottom-right part, quartimax is the worst, that is, with large-scale factor analysis it fails to mimic “varimaxian” job. Equamax and parsimax are quite similar.
Fig.3. Comparing inequality of factor variances (this is driven by coefficient $c$); the variance used as the measure of “inequality”:
Comment: Yes, with growing $c$, that is, in the line Q V E P F, the inequality of factor variances falls. Q is the leader of the inequality, which tells of its propensity for “general factor”, and at that its gap with the other criteria enlarges as p grows or m/p grows.
Comparing inequality of factor variances (this is driven by coefficient $c$); proportion “sum of absolute loadings of the strongest factor / average of such sums across the rest m-1 factors” was used as the measure of “inequality”:
This is another and more direct test for the presence of “general factor”. The configuration of results was almost the same as on the previous picture Fig.3, so I’m not showing a picture.
Disclaimer. These tries, on which the above pics are based, were done on loading matrices with random nonsharp factor structures, i.e. there were no specially preset clear clusters of variables or other specific structure among the loadings.
Kaiser normalization. From the above Fig.1-2 one can learn that versions of quartimax and varimax without the normalization perform the two tasks (the maximizations) markedly better than when accompanied by the normalization. At the same time, absence of the normalization is a little bit more prone to “general factor” (Fig.3).
The question whether Kaiser normalization should be used (and when), seems still open to me. Perhaps one should try both, with and without the normalization, and see where the applied factor interpretation was more satisfying. When we don’t know what to choose based on math grounds, it’s time we resort to “philosophical” consideration, what are set contrasted, as usual. I could imagine of two positions:
Contra normalization. A variable with small communality (high uniqueness) is not much helpful with any rotation. It contains only traces of the totality of the m factors, so lacks a chance to get a large loading of any of them. But we are interpreting factors mostly by large loadings, and the smaller is the loading the harder is to sight the essence of the factor in the variable. It would be justified even to exclude a variable with small communality from the rotation. Kaiser normalization is what is counter-directed to such motive/motif.
Pro normalization. Communality (non-uniqueness) of a variable is the amount of its inclination to the space of m factors from the outside (i.e., it is the magnitude of its projection into that space). Rotation of axes inside that space is not related with that inclination. The rotation – solving the question which of the m factors will and which will not load the variable – concerns equally a variable will any size of communality, because the initial suspense of the said “internal” decision is sharp to the same degree to all variables with their “external” inclination. So, as long as we are choosing to speak of the variables and not their projections inside, there’s no reason to spread them weights depending on their inclinations, in the act of rotation. And, to manage to discern the essence of a factor in the variable under any size of the loading – is a desideratum (and theoretically a must) for an interpreter of factors.
Orthogonal analytic rotations (Orthomax) algorithm pseudocode
Shorthand notation:
* matrix multiplication (or simple multiplication, for scalars)
&* elementwise (Hadamard) multiplication
^ exponentiation of elements
sqrt(M) square roots of elements in matrix M
rsum(M) row sums of elements in matrix M
csum(M) column sums of elements in matrix M
rssq(M) row sums of squares in matrix M, = rsum(M^2)
cssq(M) column sums of squares in matrix M, = csum(M^2)
msum(M) sum of elements in matrix M
make(nr,nc,val) create nr x nc matrix populated with value val
A is p x m loading matrix with m orthogonal factors, p variables
If Kaiser normalization is requested:
h = sqrt(rssq(A)). /*sqrt(communalities), column vector
A = A/(h*make(1,m,1)). /*Bring all communalities to unit
R is the orthogonal rotation matrix to accrue:
Initialize it as m x m identity matrix
Compute the initial value of the criterion Crit;
the coefficient c is: 0 for Quartimax, 1 for Varimax, m/2 for Equamax,
p(m-1)/(p+m-2) for Parsimax, p for Facpars; or you may choose arbitrary c:
Q = msum(A^4)
If “Quartimax”
Crit = Q
Else
W = rssq(cssq(A))
Crit = p*Q – c*W
Begin iterations
For each pair of factors (columns of A) i, j (i<j) do:
ai = A(:,i) /*Copy out the
aj = A(:,j) /*two factors
u = ai^2 – aj^2
v = 2 * ai &* aj
@d = 2 * csum(u &* v)
@c = csum(u^2 – v^2)
@a = csum(u)
@b = csum(v)
Compute the angle Phi of rotation of the two factors in their space
(coefficient c as defined above):
num = @d – c * 2*@a*@b/p
den = @c – c * (@a^2 - @b^2)/p
Phi4 = artan(num/den) /*4Phi (in radians)
If den>0 /*4Phi is in the 1st or the 4th quadrant
Phi = Phi4/4
Else if num>0 /*4Phi is in the 2nd quadrant (pi is the pi value)
Phi = (pi + Phi4)/4
Else /*4Phi is in the 3rd quadrant
Phi = (Phi4 – pi)/4
Perform the rotation of the pair (rotate if Phi is not negligible):
@sin = sin(Phi)
@cos = cos(Phi)
r_ij = {@cos,-@sin;@sin,@cos} /*The 2 x 2 rotation matrix
A(:,{i,j}) = {ai,aj} * r_ij /*Rotate factors (columns) i and j in A
R(:,{i,j}) = R(:,{i,j}) * r_ij /*Update also the columns of the being accrued R
Go to consider next pair of factors i, j, again copying them out, etc.
When all pairs are through, compute the criterion:
Crit = … (see as defined above)
End iterations if Crit has stopped growing any much (say, increase not greater than
0.0001 versus the previous iteration), or the stock of iterations (say, 50) exhausted.
If Kaiser normalization was requested:
A = A &* (h*make(1,m,1)) /*De-normalize
Ready. A has been rotated. A(input)*R = A(output)
Optional post-actions, for convenience:
1) Reorder factors by decreasing their variances (i.e., their cssq(A)).
2) Switch sign of the loadings so that positive loadings prevail in each factor.
Quartimax and Varimax are always positive values; others can be negative.
All the criteria grow on iterations. | Factor rotation methods (varimax, quartimax, oblimin, etc.) - what do the names mean and what do the
This answer succeeds this general question on rotations in factor analysis (please read it) and briefly describes a number of specific methods.
Rotations are performed iteratively and on every pair of |
19,778 | Factor rotation methods (varimax, quartimax, oblimin, etc.) - what do the names mean and what do the methods do? | Rotation methods optimise heuristic functions with the aim of "simplifying" factor loadings. Simplicity can be defined in many different ways. The most commonly used ones come from Thurstone [2]: sparsity, column simplicity and parsimony, row-simplicity (or complexity). Most rotation criteria address one or the other of these; their names are not really important.
Single criteria are included in families of criteria: the most comprehensive one is the Crawford-Ferguson family, which is equivalent to the Orthomax family for orthogonal rotations. These families provide a weighting of both simplicity requirements, controlled by different parameters. By changing these, almost all known rotation criteria can be obtained. An excellent and accessible overview of rotation methods is the Browne paper [1].
[1] M. Browne, An overview of analytic rotation in exploratory factor analysis, Multivariate Behavioral Research 36 (2001), pp. 111–150.
[2] L. Thurstone, Multiple-factor analysis, The University of Chicago Press, 1947 | Factor rotation methods (varimax, quartimax, oblimin, etc.) - what do the names mean and what do the | Rotation methods optimise heuristic fuctions with the aim of "simplifying" factor loadings. Simplicity can be defined in many different ways. The most commonly used ones come from Thurnstone [2]: spar | Factor rotation methods (varimax, quartimax, oblimin, etc.) - what do the names mean and what do the methods do?
Rotation methods optimise heuristic fuctions with the aim of "simplifying" factor loadings. Simplicity can be defined in many different ways. The most commonly used ones come from Thurnstone [2]: sparsity, column simplicity and parsimony, row-simplicity (or complexity). Most rotation criteria address one or the other of both, their names are not really important.
Single criteria are included in families of criteria: the most comprehensive one is the Crawford-Ferguson one, which is equivalent to the Orthomax family for orthogonal rotations. These families provide a weighing of both simplicity requirements controlled by different parameters. By changing these, almost all known rotation criteria can be obtained. An excellent and accessible overview of rotation methods is the Browne paper.
[1] M. Browne, An overview of analytic rotation in exploratory factor analysis, Multivariate Behavioral Research 36 (2001), pp. 111–150.
[2] L. Thurstone, Multiple-factor analysis, The University of Chicago Press, 1947 | Factor rotation methods (varimax, quartimax, oblimin, etc.) - what do the names mean and what do the
Rotation methods optimise heuristic fuctions with the aim of "simplifying" factor loadings. Simplicity can be defined in many different ways. The most commonly used ones come from Thurnstone [2]: spar |
19,779 | What is the autocorrelation for a random walk? | (I wrote this as an answer to another post, which was marked as a duplicate of this one while I was composing it; I figured I'd post it here rather than throw it away. It looks like it says quite similar things to whuber's answer but it is just different enough that someone might get something out of this one.)
A random walk is of the form $y_t = \sum_{i=1}^t \epsilon_i$
Note that $y_t = y_{t-1}+ \epsilon_t$
Hence $\text{Cov}(y_t,y_{t-1})=\text{Cov}(y_{t-1}+ \epsilon_t,y_{t-1})=\text{Var}(y_{t-1})$.
Also note that $\sigma^2_t=\text{Var}(y_t) = t\,\sigma^2_\epsilon$
Consequently $\text{corr}(y_t,y_{t-1})=\frac{\sigma_{t-1}^2}{\sigma_{t-1}\sigma_t} =\frac{\sigma_{t-1}}{\sigma_t}=\sqrt{\frac{t-1}{t}}=\sqrt{1-\frac{1}{t}}\approx 1-\frac{1}{2t}$.
Which is to say you should see a correlation of almost 1 because as soon as $t$ starts to get large, $y_t$ and $y_{t-1}$ are almost exactly the same thing -- the relative difference between them tends to be fairly small.
You can see this most readily by plotting $y_t$ vs $y_{t-1}$.
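(A small R check of the formula above -- my own addition, not part of the original answer: simulate many independent walks and compare the correlation of $y_t$ with $y_{t-1}$ across replications to $\sqrt{1-1/t}$.)
set.seed(42)
n.rep <- 1e4; t.max <- 50
y <- t(apply(matrix(rnorm(n.rep * t.max), n.rep, t.max), 1, cumsum))  # n.rep standard-normal random walks
cor(y[, t.max], y[, t.max - 1])   # empirical correlation across replications, about 0.99
sqrt(1 - 1/t.max)                 # theoretical value, 0.98995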
We can now see it somewhat intuitively -- imagine $y_{t-1}$ has drifted down to $-20$ (as we see it did in my simulation of a random walk with standard normal noise term). Then $y_t$ is going to be pretty close to $-20$; it might be $-22$ or it might be $-18.5$ but it's nearly certain to be within a few units of $-20$. So as the series drifts up and down, the plot of $y_t$ vs $y_{t-1}$ is going to nearly always stay within quite a narrow range of the $y=x$ line... yet as $t$ grows the points will cover greater and greater stretches along that $y=x$ line (the spread along the line grows with $\sqrt{t}$, but the vertical spread remains roughly constant); the correlation must approach 1. | What is the autocorrelation for a random walk? | (I wrote this as an answer to another post, which was marked as a duplicate of this one while I was composing it; I figured I'd post it here rather than throw it away. It looks like it says quite simi | What is the autocorrelation for a random walk?
(I wrote this as an answer to another post, which was marked as a duplicate of this one while I was composing it; I figured I'd post it here rather than throw it away. It looks like it says quite similar things to whuber's answer but it is just different enough that someone might get something out of this one.)
A random walk is of the form $y_t = \sum_{i=1}^t \epsilon_i$
Note that $y_t = y_{t-1}+ \epsilon_t$
Hence $\text{Cov}(y_t,y_{t-1})=\text{Cov}(y_{t-1}+ \epsilon_t,y_{t-1})=\text{Var}(y_{t-1})$.
Also note that $\sigma^2_t=\text{Var}(y_t) = t\,\sigma^2_\epsilon$
Consequently $\text{corr}(y_t,y_{t-1})=\frac{\sigma_{t-1}^2}{\sigma_{t-1}\sigma_t} =\frac{\sigma_{t-1}}{\sigma_t}=\sqrt{\frac{t-1}{t}}=\sqrt{1-\frac{1}{t}}\approx 1-\frac{1}{2t}$.
Which is to say you should see a correlation of almost 1 because as soon as $t$ starts to get large, $y_t$ and $y_{t-1}$ are almost exactly the same thing -- the relative difference between them tends to be fairly small.
You can see this most readily by plotting $y_t$ vs $y_{t-1}$.
We can now see it somewhat intuitively -- imagine $y_{t-1}$ has drifted down to $-20$ (as we see it did in my simulation of a random walk with standard normal noise term). Then $y_t$ is going to be pretty close to $-20$; it might be $-22$ or it might be $-18.5$ but it's nearly certain to be within a few units of $-20$. So as the series drifts up and down, the plot of $y_t$ vs $y_{t-1}$ is going to nearly always stay within quite a narrow range of the $y=x$ line... yet as $t$ grows the points will cover greater and greater stretches along that $y=x$ line (the spread along the line grows with $\sqrt{t}$, but the vertical spread remains roughly constant); the correlation must approach 1. | What is the autocorrelation for a random walk?
(I wrote this as an answer to another post, which was marked as a duplicate of this one while I was composing it; I figured I'd post it here rather than throw it away. It looks like it says quite simi |
19,780 | What is the autocorrelation for a random walk? | In the context of your previous question, a "random walk" is one realization $(x_0, x_1, x_2, \ldots, x_n)$ of a binomial random walk. Autocorrelation is the correlation between the vector $(x_0, x_1, \ldots, x_{n-1})$ and the vector of the next elements $(x_1,x_2, \ldots, x_n)$.
The very construction of a binomial random walk causes each $x_{i+1}$ to differ from each $x_i$ by a constant. After running the walk for a while, the values of $x_i$ will have wandered away from the initial value $x_0$ and thereby will usually cover a good range, typically proportional to $\sqrt{n}$ in length. Thus the lag-1 scatterplot of the $(x_i, x_{i+1})$ pairs will consist of points lying only on the lines $y=x\pm 1$, on average being close to the line $y=x$. The residuals will be close to $\pm 1$. Therefore, in the vast majority of realizations, the variance of the residuals (about $1$) compared to the variance of the values (roughly on the order of $(\sqrt{n}/2)^2 = n/4$) will be small. We would expect $R^2$ to be approximately
$$R^2 \approx 1 - \frac{1}{n/4} = 1 - \frac{4}{n}.$$
Here is a picture of $n=1000$ steps in a random walk (on the left) and its lag-1 scatterplot (on the right). Color coding is used to help you find corresponding points in the two plots. Notice that $R^2$ is very close indeed to $1 - 4/n$ in this case.
Here is the R code that produced the images.
set.seed(17)
n <- 1e3
x <- cumsum((runif(n) <= 1/2)*2-1) # Binomial random walk at x_0=0
rho <- format(cor(x[-1], x[-n]), digits=3) # Lag-1 correlation
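# (added check, not in the original code) compare the squared lag-1 correlation
# with the approximation R^2 ~ 1 - 4/n derived above
c(R2 = as.numeric(rho)^2, approx = 1 - 4/n)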
par(mfrow=c(1,2))
plot(x, type="l", col="#e0e0e0", main="Sample Path")
points(x, pch=16, cex=0.75, col=hsv(1:n/n, .8, .8, .2))
plot(x[-n], x[-1], asp=1, pch=16, col=hsv(1:n/n, .8, .8, .2),
main="Lag-1 Scatterplot",
xlab="Current value", ylab="Next value")
mtext(bquote(rho == .(rho))) | What is the autocorrelation for a random walk? | In the context of your previous question, a "random walk" is one realization $(x_0, x_1, x_2, \ldots, x_n)$ of a binomial random walk. Autocorrelation is the correlation between the vector $(x_0, x_1 | What is the autocorrelation for a random walk?
In the context of your previous question, a "random walk" is one realization $(x_0, x_1, x_2, \ldots, x_n)$ of a binomial random walk. Autocorrelation is the correlation between the vector $(x_0, x_1, \ldots, x_{n-1})$ and the vector of the next elements $(x_1,x_2, \ldots, x_n)$.
The very construction of a binomial random walk causes each $x_{i+1}$ to differ from each $x_i$ by a constant. After running the walk for a while, the values of $x_i$ will have wandered away from the initial value $x_0$ and thereby will usually cover a good range, typically proportional to $\sqrt{n}$ in length. Thus the lag-1 scatterplot of the $(x_i, x_{i+1})$ pairs will consist of points lying only on the lines $y=x\pm 1$, on average being close to the line $y=x$. The residuals will be close to $\pm 1$. Therefore, in the vast majority of realizations, the variance of the residuals (about $1$) compared to the variance of the values (roughly on the order of $(\sqrt{n}/2)^2 = n/4$) will be small. We would expect $R^2$ to be approximately
$$R^2 \approx 1 - \frac{1}{n/4} = 1 - \frac{4}{n}.$$
Here is a picture of $n=1000$ steps in a random walk (on the left) and its lag-1 scatterplot (on the right). Color coding is used to help you find corresponding points in the two plots. Notice that $R^2$ is very close indeed to $1 - 4/n$ in this case.
Here is the R code that produced the images.
set.seed(17)
n <- 1e3
x <- cumsum((runif(n) <= 1/2)*2-1) # Binomial random walk at x_0=0
rho <- format(cor(x[-1], x[-n]), digits=3) # Lag-1 correlation
par(mfrow=c(1,2))
plot(x, type="l", col="#e0e0e0", main="Sample Path")
points(x, pch=16, cex=0.75, col=hsv(1:n/n, .8, .8, .2))
plot(x[-n], x[-1], asp=1, pch=16, col=hsv(1:n/n, .8, .8, .2),
main="Lag-1 Scatterplot",
xlab="Current value", ylab="Next value")
mtext(bquote(rho == .(rho))) | What is the autocorrelation for a random walk?
In the context of your previous question, a "random walk" is one realization $(x_0, x_1, x_2, \ldots, x_n)$ of a binomial random walk. Autocorrelation is the correlation between the vector $(x_0, x_1 |
19,781 | Is Cross entropy cost function for neural network convex? | The cross entropy of an exponential family is always convex. So, for a multilayer neural network having inputs $x$, weights $w$, and output $y$, and loss function $L$
$$\nabla^2_y L$$
is positive semidefinite, i.e. the loss is convex in the network output $y$. However,
$$\nabla^2_w L$$
is not going to be convex for the parameters of the middle layer for the reasons described by iamonaboat. | Is Cross entropy cost function for neural network convex? | The cross entropy of an exponential family is always convex. So, for a multilayer neural network having inputs $x$, weights $w$, and output $y$, and loss function $L$
$$\nabla^2_y L$$
is convex. How | Is Cross entropy cost function for neural network convex?
The cross entropy of an exponential family is always convex. So, for a multilayer neural network having inputs $x$, weights $w$, and output $y$, and loss function $L$
$$\nabla^2_y L$$
is convex. However,
$$\nabla^2_w L$$
is not going to be convex for the parameters of the middle layer for the reasons described by iamonaboat. | Is Cross entropy cost function for neural network convex?
The cross entropy of an exponential family is always convex. So, for a multilayer neural network having inputs $x$, weights $w$, and output $y$, and loss function $L$
$$\nabla^2_y L$$
is convex. How |
19,782 | Is Cross entropy cost function for neural network convex? | You are right in suspecting that the ANN optimisation problem of the cross-entropy problem will be non-convex. Note: we are talking about a neural network with non-linear activation function at the hidden layer. Also, non-linearity has the potential of introducing local minima in the optimization of the objective function. If you don't use a non-linear activation function then your ANN is implementing a linear function and the problem will become convex.
So the reason the optimisation of the cross-entropy of an ANN is non-convex is the underlying parametrisation of the ANN. If you use a linear neural network, you can make it convex (it will essentially look like logistic regression, which is a convex problem).
You are right in suspecting that the ANN optimisation problem of the cross-entropy problem will be non-convex. Note: we are talking about a neural network with non-linear activation function at the hidden layer. Also, non-linearity has the potential of introducing local minima in the optimization of the objective function. If you don't use a non-linear activation function then your ANN is implementing a linear function and the problem will become convex.
So the reason why the optimisation of the cross-entropy of a ANN is non-convex is because of the underlying parametrisation of the ANN. If you use a linear neural network, you can make it convex (it will essentially look like logistic regression which is a convex problem). | Is Cross entropy cost function for neural network convex?
You are right in suspecting that the ANN optimisation problem of the cross-entropy problem will be non-convex. Note: we are talking about a neural network with non-linear activation function at the hi |
19,783 | Is Cross entropy cost function for neural network convex? | What @ngiann said, and informally, if you permute the neurons in the hidden layer and do the same permutation on the weights of the adjacent layers then the loss doesn't change.
Hence if there is a non-zero global minimum as a function of the weights, it can't be unique, since permuting the weights gives another global minimum. And if the loss were convex, every point on the segment connecting those two distinct minima would have to be a global minimum as well, which is generally not the case; hence the function is not convex.
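(A tiny numerical illustration of the permutation argument -- my own addition, in R, with a made-up two-hidden-unit network: swapping the hidden units together with their incoming and outgoing weights leaves the cross-entropy unchanged.)
set.seed(1)
x  <- matrix(rnorm(20), 10, 2); y <- rbinom(10, 1, 0.5)  # made-up data
W1 <- matrix(rnorm(4), 2, 2); b1 <- rnorm(2)             # input -> hidden (2 hidden units)
w2 <- rnorm(2); b2 <- rnorm(1)                           # hidden -> output
xent <- function(W1, b1, w2, b2) {
  h <- tanh(sweep(x %*% t(W1), 2, b1, "+"))              # hidden activations
  p <- plogis(drop(h %*% w2) + b2)                       # output probability
  -mean(y * log(p) + (1 - y) * log(1 - p))               # cross-entropy
}
xent(W1, b1, w2, b2)
xent(W1[2:1, ], b1[2:1], w2[2:1], b2)                    # permuted network: identical loss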
The matrix of all second partial derivatives (the Hessian) is neither positive semidefinite, nor negative semidefinite. Since the second derivative is a matrix, it's possible that it's neither one or the other. | Is Cross entropy cost function for neural network convex? | What @ngiann said, and informally, if you permute the neurons in the hidden layer and do the same permutation on the weights of the adjacent layers then the loss doesn't change.
Hence if there is a n | Is Cross entropy cost function for neural network convex?
What @ngiann said, and informally, if you permute the neurons in the hidden layer and do the same permutation on the weights of the adjacent layers then the loss doesn't change.
Hence if there is a non-zero global minima as a function of weights, then it can't be unique since the permutation of weights gives another global minimum. Hence the function is not convex.
The matrix of all second partial derivatives (the Hessian) is neither positive semidefinite, nor negative semidefinite. Since the second derivative is a matrix, it's possible that it's neither one or the other. | Is Cross entropy cost function for neural network convex?
What @ngiann said, and informally, if you permute the neurons in the hidden layer and do the same permutation on the weights of the adjacent layers then the loss doesn't change.
Hence if there is a n |
19,784 | Output of Scikit SVM in multiclass classification always gives same label | A likely cause is the fact you are not tuning your model. You need to find good values for $C$ and $\gamma$. In your case, the defaults turn out to be bad, which leads to trivial models that always yield a certain class. This is particularly common if one class has much more instances than the others. What is your class distribution?
scikit-learn has limited hyperparameter search facilities, but you can use it together with a tuning library like Optunity. An example of tuning scikit-learn SVC with Optunity is available here.
Disclaimer: I am the lead developer of Optunity. | Output of Scikit SVM in multiclass classification always gives same label | A likely cause is the fact you are not tuning your model. You need to find good values for $C$ and $\gamma$. In your case, the defaults turn out to be bad, which leads to trivial models that always yi | Output of Scikit SVM in multiclass classification always gives same label
A likely cause is the fact you are not tuning your model. You need to find good values for $C$ and $\gamma$. In your case, the defaults turn out to be bad, which leads to trivial models that always yield a certain class. This is particularly common if one class has much more instances than the others. What is your class distribution?
scikit-learn has limited hyperparameter search facilities, but you can use it together with a tuning library like Optunity. An example about tuning scikit-learn SVC with Optunity is available here.
Disclaimer: I am the lead developer of Optunity. | Output of Scikit SVM in multiclass classification always gives same label
A likely cause is the fact you are not tuning your model. You need to find good values for $C$ and $\gamma$. In your case, the defaults turn out to be bad, which leads to trivial models that always yi |
19,785 | Output of Scikit SVM in multiclass classification always gives same label | The problem does turn out to be parameter testing. I did not try when gamma is between 0.0 (which is 1/n_feature) and 1. On my data gamma should be turn to something around 1e-8 | Output of Scikit SVM in multiclass classification always gives same label | The problem does turn out to be parameter testing. I did not try when gamma is between 0.0 (which is 1/n_feature) and 1. On my data gamma should be turn to something around 1e-8 | Output of Scikit SVM in multiclass classification always gives same label
The problem does turn out to be parameter testing. I did not try when gamma is between 0.0 (which is 1/n_feature) and 1. On my data gamma should be turn to something around 1e-8 | Output of Scikit SVM in multiclass classification always gives same label
The problem does turn out to be parameter testing. I did not try when gamma is between 0.0 (which is 1/n_feature) and 1. On my data gamma should be turn to something around 1e-8 |
19,786 | Formatting graphs: when is it appropriate to use a fill under a line graph? | There is a bit of an art to balancing aesthetic and informative aspects of a graphic. Prominent visualization consultant/authors like Edward Tufte and Stephen Few choose a minimal aesthetic that avoids distraction from the informative parts of the graph. However, for some audiences a small amount of flourish is justified -- see Alberto Cairo's journalistic application of data visualization, for instance.
The perceptual research angle is that every graphic element communicates a message, some that we aren't consciously aware of because our visual cortex deals with it ("pre-attentive processing"). Extra elements, even redundant ones, can result in extra processing. The appropriate graph depends on the message to be communicated and the audience.
To your specific question, a connected line emphasizes a trend (and variation from a trend). The filled area emphasizes deviation from the baseline. A bar chart or needle chart would emphasize discrete events deviating from a baseline.
The graphic's context is also important. If you have a grid of tightly packed graphs, the fill will help associate each line with its baseline.
Finally, another consideration for adding redundant graphic elements/florishes is that it makes it harder to extend the graph with more informative elements. For instance, you might want to highlight special values, overlay other trend lines or overlay bands, such as in the following mock-up. | Formatting graphs: when is it appropriate to use a fill under a line graph? | There is a bit of an art to balancing aesthetic and informative aspects of a graphic. Prominent visualization consultant/authors like Edward Tufte and Stephen Few choose a minimal aesthetic that avoid | Formatting graphs: when is it appropriate to use a fill under a line graph?
There is a bit of an art to balancing aesthetic and informative aspects of a graphic. Prominent visualization consultant/authors like Edward Tufte and Stephen Few choose a minimal aesthetic that avoids distraction from the informative parts of the graph. However, for some audiences a small amount of flourish is justified -- see Alberto Cairo's journalistic application of data visualization, for instance.
The perceptual research angle is that every graphic element communicates a message, some that we aren't consciously aware of because our visual cortex deals with it ("pre-attentive processing"). Extra elements, even redundant ones, can result in extra processing. The appropriate graph depends on the message to be communicated and the audience.
To your specific question, a connected line emphasizes a trend (and variation from a trend). The filled area emphasizes deviation from the baseline. A bar chart or needle chart would emphasize discrete events deviating from a baseline.
The graphic's context is also important. If you have a grid of tightly packed graphs, the fill will help associate each line with its baseline.
Finally, another consideration for adding redundant graphic elements/florishes is that it makes it harder to extend the graph with more informative elements. For instance, you might want to highlight special values, overlay other trend lines or overlay bands, such as in the following mock-up. | Formatting graphs: when is it appropriate to use a fill under a line graph?
There is a bit of an art to balancing aesthetic and informative aspects of a graphic. Prominent visualization consultant/authors like Edward Tufte and Stephen Few choose a minimal aesthetic that avoid |
19,787 | Formatting graphs: when is it appropriate to use a fill under a line graph? | The previous two answers cover the main important points, but there are a few things that should still be mentioned.
First, I should say that I disagree with the extreme minimalist approach to graphing -- that all redundant ink must go. Distracting, non-meaningful variation should go. But a solid area versus a single line can catch the eye better and communicate more at a glance. And as you say, it can add "visual variety".
However, as @xan points out, that quick glance also interprets an area differently than a line, in ways partially subconscious.
An area graph implies a total quantity accumulating as you proceed along the x-axis. If you compare two graphs, and one has a larger area filled in, your glance will tell you that it has a greater total regardless of the start and end values.
In contrast, a line graph shows a changing value. The focus is on the change in position from one point to the next, not on the total accumulated.
So when should you use an area graph?
when the values represent a clear quantity with a definite zero point shown on the graph;
when the value represents an amount added (or removed) at each point, such as normal daily rainfall or monthly profit/loss;
when the value represents a distribution of a population, meaning that the total area under the curve represents the total size of the sample, such as a bell curve of the number of students with different grades (basically a smoothed histogram).
The idea is that, when reading the graph, if you take two points on the x-axis, the area shown between them should represent an actual amount of something accumulating in that range. For this reason, if your values include negative amounts I'd recommend using opposite colours for negative and positive areas to emphasize that they cancel out in the total.
When should you not use an area graph?
when the zero point is arbitrary (as in non-absolute temperature, as @timcdlucas said), invalid (as in measurements that are a ratio of two values, like an exchange rate), or not shown on the graph for space reasons;
when the values shown by the height of the line already represent a cumulative measure, such as total rainfall to date (for the month/year) or debt/savings;
when the values represent the position/value of a single changing entity rather than an accumulation;
when you want to compare multiple lines on the same chart (if you can't see the whole area, you lose the meaning -- compare area charts side-by-side instead).
With those guidelines in mind, your ping graph can be interpreted two ways.
On the one hand, if you think of the ping speed as a single variable that changes over the course of the day, then a simple line chart would be most appropriate.
On the other hand, if you were comparing two different networks' daily ping-speed patterns (or the same network on different days / time periods), then maybe you want to emphasize the total amount of time required for network tasks. For example, if your graph had multiple peaks, instead of just one, a line graph would emphasize the variability in speed while an area graph would emphasize total delay.
Compare:
The cumulative total is slightly greater in the first half of the graph (left of the red line) than the second, even if the peaks hit higher max values on the right. Filling in emphasizes that solid block on the left, so that it balances better against the peaks.
(Forgive the poor image quality -- couldn't figure out how to get R to do an area graph! Had to export and edit separately.) | Formatting graphs: when is it appropriate to use a fill under a line graph? | The previous two answers cover the main important points, but there are a few things that should still be mentioned.
First, I should say that I disagree with the extreme minimalist approach to graphin | Formatting graphs: when is it appropriate to use a fill under a line graph?
The previous two answers cover the main important points, but there are a few things that should still be mentioned.
First, I should say that I disagree with the extreme minimalist approach to graphing -- that all redundant ink must go. Distracting, non-meaningful variation should go. But a solid area versus a single line can catch the eye better and communicate more at a glance. And as you say, it can add "visual variety".
However, as @xan points out, that quick glance also interprets an area differently than a line, in ways partially subconscious.
An area graph implies a total quantity accumulating as you proceed along the x-axis. If you compare two graphs, and one has a larger area filled in, your glance will tell you that it has a greater total regardless of the start and end values.
In contrast, a line graph shows a changing value. The focus is on the change in position from one point to the next, not on the total accumulated.
So when should you use an area graph?
when the values represent a clear quantity with a definite zero point shown on the graph;
when the value represents an amount added (or removed) at each point, such as normal daily rainfall or monthly profit/loss;
when the value represents a distribution of a population, meaning that the total area under the curve represents the total size of the sample, such as a bell curve of the number of students with different grades (basically a smoothed histogram).
The idea is that, when reading the graph, if you take two points on the x-axis, the area shown between them should represent an actual amount of something accumulating in that range. For this reason, if your values include negative amounts I'd recommend using opposite colours for negative and positive areas to emphasize that they cancel out in the total.
When should you not use an area graph?
when the zero point is arbitrary (as in non-absolute temperature, as @timcdlucas said), invalid (as in measurements that are a ratio of two values, like an exchange rate), or not shown on the graph for space reasons;
when the values shown by the height of the line already represent a cumulative measure, such as total rainfall to date (for the month/year) or debt/savings;
when the values represent the position/value of a single changing entity rather than an accumulation;
when you want to compare multiple lines on the same chart (if you can't see the whole area, you lose the meaning -- compare area charts side-by-side instead).
With those guidelines in mind, your ping graph can be interpreted two ways.
On the one hand, if you think of the ping speed as a single variable that changes over the course of the day, then a simple line chart would be most appropriate.
On the other hand, if you were comparing two different networks' daily ping-speed patterns (or the same network on different days / time periods), then maybe you want to emphasize the total amount of time required for network tasks. For example, if your graph had multiple peaks, instead of just one, a line graph would emphasize the variability in speed while an area graph would emphasize total delay.
Compare:
The cumulative total is slightly greater in the first half of the graph (left of the red line) than the second, even if the peaks hit higher max values on the right. Filling in emphasizes that solid block on the left, so that it balances better against the peaks.
(Forgive the poor image quality -- couldn't figure out how to get R to do an area graph! Had to export and edit separately.) | Formatting graphs: when is it appropriate to use a fill under a line graph?
The previous two answers cover the main important points, but there are a few things that should still be mentioned.
First, I should say that I disagree with the extreme minimalist approach to graphin |
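The answer above mentions not finding a way to draw an area graph in R. As an illustrative sketch (the hourly ping times below are made up for the example, not taken from the question), base R's polygon() can fill the area under a line; ggplot2's geom_area() is another option.

# Hypothetical data purely for illustration
x <- 1:24                                   # hour of day
y <- 40 + 40 * exp(-((x - 13)^2) / 8)       # made-up ping times (ms)
plot(x, y, type = "n", ylim = c(0, max(y)), xlab = "Hour", ylab = "Ping (ms)")
polygon(c(x, rev(x)), c(y, rep(0, length(x))), col = "grey80", border = NA)  # fill under the line
lines(x, y, lwd = 2)                        # draw the line on top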
19,788 | Formatting graphs: when is it appropriate to use a fill under a line graph? | A couple more points to consider:
As mentioned in a comment, an underfill is largely inappropriate if the x axis is not at a natural y zero point. This might be because the y axis is scaled to start at a number other than zero, or because the units used do not have a natural zero interpretation (e.g. Kelvin has a natural zero, while Celsius does not.)
Secondly, a case when an underfill is particularly valid is if the data themselves could be considered underfilled. For example, a line chart of the height of a mountain makes sense to be underfilled: the fill colour represents earth, while the unfilled area represents air.
A related example might be count data. If we stacked all the individuals at each x point, we would get a bar chart. If interpolating between the bars makes sense we would end up with a line chart with an underfill.
This image from 'The Visual Display of Quantitative Information' might explain it a little better. It shows which military units were in Europe during the Second World War (I think). Stacking the units at each time point gives you an underfilled bar chart. Drawing a line over the top of the data gives you an underfilled line chart. | Formatting graphs: when is it appropriate to use a fill under a line graph? | A couple more points to consider:
As mentioned in a comment, an underfill is largely inappropriate if the x axis is not at a natural y zero point. This might be because the y axis is scaled to start a | Formatting graphs: when is it appropriate to use a fill under a line graph?
A couple more points to consider:
As mentioned in a comment, an underfill is largely inappropriate if the x axis is not at a natural y zero point. This might be because the y axis is scaled to start at a number other than zero, or because the units used do not have a natural zero interpretation (e.g. Kelvin has a natural zero, while Celsius does not.)
Secondly, a case when an underfill is particularly valid is if the data themselves could be considered underfilled. For example, a line chart of the height of a mountain makes sense to be underfilled: the fill colour represents earth, while the unfilled area represents air.
A related example might be count data. If we stacked all the individuals at each x point, we would get a bar chart. If interpolating between the bars makes sense we would end up with a line chart with an underfill.
This image from 'The Visual Display of Quantitative Information' might explain it a little better. It shows which military units were in Europe during the Second World War (I think). Stacking the units at each time point gives you an underfilled bar chart. Drawing a line over the top of the data gives you an underfilled line chart.
A couple more points to consider:
As mentioned in a comment, an underfill is largely inappropriate if the x axis is not at a natural y zero point. This might be because the y axis is scaled to start a |
19,789 | What is the difference between exponential and geometric distribution? | Did you try looking at Wikipedia?
The exponential distribution may be viewed as a continuous counterpart of the geometric distribution, which describes the number of Bernoulli trials necessary for a discrete process to change state. In contrast, the exponential distribution describes the time for a continuous process to change state. | What is the difference between exponential and geometric distribution? | Did you try looking at Wikipedia?
The exponential distribution may be viewed as a continuous counterpart of the geometric distribution, which describes the number of Bernoulli trials necessary for a | What is the difference between exponential and geometric distribution?
Did you try looking at Wikipedia?
The exponential distribution may be viewed as a continuous counterpart of the geometric distribution, which describes the number of Bernoulli trials necessary for a discrete process to change state. In contrast, the exponential distribution describes the time for a continuous process to change state. | What is the difference between exponential and geometric distribution?
Did you try looking at Wikipedia?
The exponential distribution may be viewed as a continuous counterpart of the geometric distribution, which describes the number of Bernoulli trials necessary for a |
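To make the continuous/discrete analogy concrete, here is a small sketch (my own illustration, not part of the answer): rounding an exponential waiting time up to the next whole trial gives a geometric count with success probability $p = 1 - e^{-\lambda}$.

set.seed(1)
lambda <- 0.3
t_cont <- rexp(1e5, rate = lambda)   # continuous waiting times (exponential)
k_disc <- ceiling(t_cont)            # whole trials needed (geometric on 1, 2, ...)
p <- 1 - exp(-lambda)                # implied per-trial success probability
c(mean(k_disc), 1 / p)               # the two means should be close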
19,790 | What is the difference between exponential and geometric distribution? | The geometric distribution belongs to the exponential family and so does "the exponential distribution". They only differ in the parameters and sufficient statistics used in factored expression for conditional distributions from the exponential family. | What is the difference between exponential and geometric distribution? | The geometric distribution belongs to the exponential family and so does "the exponential distribution". They only differ in the parameters and sufficient statistics used in factored expression for co | What is the difference between exponential and geometric distribution?
The geometric distribution belongs to the exponential family and so does "the exponential distribution". They only differ in the parameters and sufficient statistics used in factored expression for conditional distributions from the exponential family. | What is the difference between exponential and geometric distribution?
The geometric distribution belongs to the exponential family and so does "the exponential distribution". They only differ in the parameters and sufficient statistics used in factored expression for co |
19,791 | What is the difference between exponential and geometric distribution? | Exponential distributions involve raising numbers to a certain power whereas geometric distributions are more general in nature and involve performing various operations on numbers such as multiplying a certain number by two continuously. Exponential distributions are more specific types of geometric distributions.
Exponential distributions: 2, 4, 16, 256 or 3, 9, 81, 6561.
Geometric distribution: 2, 4, 8, 16, 32, 64.
Just my two cents anyway. | What is the difference between exponential and geometric distribution? | Exponential distributions involve raising numbers to a certain power whereas geometric distributions are more general in nature and involve performing various operations on numbers such as multiplying | What is the difference between exponential and geometric distribution?
Exponential distributions involve raising numbers to a certain power whereas geometric distributions are more general in nature and involve performing various operations on numbers such as multiplying a certain number by two continuously. Exponential distributions are more specific types of geometric distributions.
Exponential distributions: 2, 4, 16, 256 or 3, 9, 81, 6561.
Geometric distribution: 2, 4, 8, 16, 32, 64.
Just my two cents anyway. | What is the difference between exponential and geometric distribution?
Exponential distributions involve raising numbers to a certain power whereas geometric distributions are more general in nature and involve performing various operations on numbers such as multiplying |
19,792 | How to convert a frequency table into a vector of values? | In R, you can do it using the rep command:
tab <- data.frame(value=c(1, 2, 3, 4, 5), freq=c(2, 1, 4, 2, 1))
vec <- rep(tab$value, tab$freq)
This gives the following result:
> tab
value freq
1 1 2
2 2 1
3 3 4
4 4 2
5 5 1
> vec
[1] 1 1 2 3 3 3 3 4 4 5
For details, see the help file for the rep command by typing ?rep. | How to convert a frequency table into a vector of values? | In R, you can do it using the rep command:
tab <- data.frame(value=c(1, 2, 3, 4, 5), freq=c(2, 1, 4, 2, 1))
vec <- rep(tab$value, tab$freq)
This gives following result:
> tab
value freq
1 1 2 | How to convert a frequency table into a vector of values?
In R, you can do it using the rep command:
tab <- data.frame(value=c(1, 2, 3, 4, 5), freq=c(2, 1, 4, 2, 1))
vec <- rep(tab$value, tab$freq)
This gives the following result:
> tab
value freq
1 1 2
2 2 1
3 3 4
4 4 2
5 5 1
> vec
[1] 1 1 2 3 3 3 3 4 4 5
For details, see the help file for the rep command by typing ?rep.
In R, you can do it using the rep command:
tab <- data.frame(value=c(1, 2, 3, 4, 5), freq=c(2, 1, 4, 2, 1))
vec <- rep(tab$value, tab$freq)
This gives following result:
> tab
value freq
1 1 2 |
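A small round-trip sketch (added for illustration): rep() expands the frequency table into raw values, and table() collapses the raw values back into counts.

tab <- data.frame(value = c(1, 2, 3, 4, 5), freq = c(2, 1, 4, 2, 1))
vec <- rep(tab$value, times = tab$freq)   # expand the table into raw values
vec
table(vec)                                # recover the original frequencies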
19,793 | How to convert a frequency table into a vector of values? | Obviously in R it's simpler.
In Excel I would use a helper column (if the value is in A1):
value freq help
1 1 2 =REPT(A2 & ", ",B2)
2 2 1 =C1 & REPT(A3 & ", ",B3)
3 3 4 (drag or copy from upper cell)
4 4 2 (drag or copy from upper cell)
5 5 1 (drag or copy from upper cell)
=LEFT(C6, LEN(C6)-2)
In C7 you have your result | How to convert a frequency table into a vector of values? | Obviously in R it's simpler.
In Excel I would use a helper column (if the value is in A1):
value freq help
1 1 2 =REPT(A2 & ", ",B2)
2 2 1 =C1 & REPT(A3 & ", ",B3)
3 3 4 (dra | How to convert a frequency table into a vector of values?
Obviously in R it's simpler.
In Excel I would use a helper column (if the value is in A1):
value freq help
1 1 2 =REPT(A2 & ", ",B2)
2 2 1 =C1 & REPT(A3 & ", ",B3)
3 3 4 (drag or copy from upper cell)
4 4 2 (drag or copy from upper cell)
5 5 1 (drag or copy from upper cell)
=LEFT(C6, LEN(C6)-2)
In C7 you have your result | How to convert a frequency table into a vector of values?
Obviously in R it's simpler.
In Excel I would use a helper column (if the value is in A1):
value freq help
1 1 2 =REPT(A2 & ", ",B2)
2 2 1 =C1 & REPT(A3 & ", ",B3)
3 3 4 (dra |
19,794 | How to begin reading about data mining? | Being somewhat in this position myself, I'll try to give some insight.
Firstly, download the Elements of Statistical Learning. It presumes calculus and linear algebra, and although it is very technical, it is also extremely well written.
Secondly (or firstly) look at Andrew Ng's tutorials on machine learning.
Thirdly, get some data, and start attempting to analyse data. You'll need to split into training and test sets, and then build models on the training set and test them against the test set.
I found the caret package for R very useful for all of this.
After that it's practice, practice, practice (like almost everything else). | How to begin reading about data mining? | Being somewhat in this position myself, I'll try to give some insight.
Firstly, download the Elements of Statistical Learning. It presumes calculus and linear algebra, and although it is very technica | How to begin reading about data mining?
Being somewhat in this position myself, I'll try to give some insight.
Firstly, download the Elements of Statistical Learning. It presumes calculus and linear algebra, and although it is very technical, it is also extremely well written.
Secondly (or firstly) look at Andrew Ng's tutorials on machine learning.
Thirdly, get some data, and start attempting to analyse data. You'll need to split into training and test sets, and then build models on the training set and test them against the test set.
I found the caret package for R very useful for all of this.
After that it's practice, practice, practice (like almost everything else).
Being somewhat in this position myself, I'll try to give some insight.
Firstly, download the Elements of Statistical Learning. It presumes calculus and linear algebra, and although it is very technica |
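As a concrete sketch of the train/test workflow this answer describes (my example, using the built-in iris data and an arbitrary model; any caret method would do):

library(caret)
set.seed(42)
idx      <- createDataPartition(iris$Species, p = 0.8, list = FALSE)  # stratified split
train_df <- iris[idx, ]
test_df  <- iris[-idx, ]
fit <- train(Species ~ ., data = train_df, method = "rpart")          # fit on the training set
confusionMatrix(predict(fit, test_df), test_df$Species)               # evaluate on the held-out set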
19,795 | How to begin reading about data mining? | Introduction to Data Mining by Tan, Steinbach, Kumar is the best intro book out there
http://www.amazon.com/Introduction-Data-Mining-Pang-Ning-Tan/dp/0321321367
save EoSL for when you want to dig deeper. It's more of a reference. | How to begin reading about data mining? | Introduction to Data Mining by Tan, Steinbech, Kumar is the best intro book out there
http://www.amazon.com/Introduction-Data-Mining-Pang-Ning-Tan/dp/0321321367
save EoSL for when you want to dig deep | How to begin reading about data mining?
Introduction to Data Mining by Tan, Steinbach, Kumar is the best intro book out there
http://www.amazon.com/Introduction-Data-Mining-Pang-Ning-Tan/dp/0321321367
save EoSL for when you want to dig deeper. It's more of a reference. | How to begin reading about data mining?
Introduction to Data Mining by Tan, Steinbach, Kumar is the best intro book out there
http://www.amazon.com/Introduction-Data-Mining-Pang-Ning-Tan/dp/0321321367
save EoSL for when you want to dig deep |
19,796 | How to begin reading about data mining? | Data mining can be descriptive or predictive.
On the one hand, if you are interested in descriptive data mining, then machine learning won't help.
On the other hand, if you are interested in predictive data mining, then machine learning will help you understand that you try to minimize the unknown risk (expectation of the loss function) when minimizing the empirical risk: you will keep in mind overfitting, generalization error and cross-validation. For instance, for a matter of consistency, the $k$-NN for a training sample of size $n$ should be such that:
$k$ goes to infinity when $n$ goes to infinity,
$\frac{k}{n}$ goes to 0 when $n$ goes to infinity. | How to begin reading about data mining? | Data mining can be descriptive or predictive.
On the one hand, if you are interested in descriptive data mining, then machine learning won't help.
On the other hand, if you are interested in predictiv | How to begin reading about data mining?
Data mining can be descriptive or predictive.
On the one hand, if you are interested in descriptive data mining, then machine learning won't help.
On the other hand, if you are interested in predictive data mining, then machine learning will help you understand that you try to minimize the unknown risk (expectation of the loss function) when minimizing the empirical risk: you will keep in mind overfitting, generalization error and cross-validation. For instance, as a matter of consistency, the $k$-NN rule for a training sample of size $n$ should be such that:
$k$ goes to infinity when $n$ goes to infinity,
$\frac{k}{n}$ goes to 0 when $n$ goes to infinity. | How to begin reading about data mining?
Data mining can be descriptive or predictive.
On the one hand, if you are interested in descriptive data mining, then machine learning won't help.
On the other hand, if you are interested in predictiv |
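A quick numerical illustration (mine, not from the answer) of a choice that satisfies both conditions is $k_n = \lfloor\sqrt{n}\rfloor$: it grows without bound while $k_n/n$ shrinks to zero.

n <- c(1e2, 1e4, 1e6)
k <- floor(sqrt(n))
k       # 10, 100, 1000   -> grows with n
k / n   # 0.1, 0.01, 0.001 -> goes to 0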
19,797 | How to begin reading about data mining? | I only add another very good source of tutorials on data mining/machine learning by Tom Mitchell.
He explains it very clearly and You can also download his presentations from his website (together with watching his lectures there). | How to begin reading about data mining? | I only add another very good source of tutorials on data mining/machine learning by Tom Mitchell.
He explains it very clearly and You can also download his presentations from his website (together wit | How to begin reading about data mining?
I only add another very good source of tutorials on data mining/machine learning by Tom Mitchell.
He explains it very clearly and you can also download his presentations from his website (together with watching his lectures there).
I only add another very good source of tutorials on data mining/machine learning by Tom Mitchell.
He explains it very clearly and You can also download his presentations from his website (together wit |
19,798 | Can chi square be used to compare proportions? | Correct me if I'm wrong, but I think this can be done in R using this command
chisq.test(c(15, 13, 10, 17))
Chi-squared test for given probabilities
data: c(15, 13, 10, 17)
X-squared = 1.9455, df = 3, p-value = 0.5838
This assumes proportions of 1/4 each. You can modify expected values via argument p. For example, you think people may prefer (for whatever reason) one color over the other(s).
chisq.test(c(15, 13, 10, 17), p = c(0.5, 0.3, 0.1, 0.1))
Chi-squared test for given probabilities
data: c(15, 13, 10, 17)
X-squared = 34.1515, df = 3, p-value = 1.841e-07 | Can chi square be used to compare proportions? | Correct me if I'm wrong, but I think this can be done in R using this command
chisq.test(c(15, 13, 10, 17))
Chi-squared test for given probabilities
data: c(15, 13, 10, 17)
| Can chi square be used to compare proportions?
Correct me if I'm wrong, but I think this can be done in R using this command
chisq.test(c(15, 13, 10, 17))
Chi-squared test for given probabilities
data: c(15, 13, 10, 17)
X-squared = 1.9455, df = 3, p-value = 0.5838
This assumes proportions of 1/4 each. You can modify expected values via argument p. For example, you think people may prefer (for whatever reason) one color over the other(s).
chisq.test(c(15, 13, 10, 17), p = c(0.5, 0.3, 0.1, 0.1))
Chi-squared test for given probabilities
data: c(15, 13, 10, 17)
X-squared = 34.1515, df = 3, p-value = 1.841e-07 | Can chi square be used to compare proportions?
Correct me if I'm wrong, but I think this can be done in R using this command
chisq.test(c(15, 13, 10, 17))
Chi-squared test for given probabilities
data: c(15, 13, 10, 17)
|
19,799 | Can chi square be used to compare proportions? | Using the extra information you gave (namely that quite a few of the values are 0), it's pretty obvious why your solution returns nothing. For one, you have a probability that is 0, so:
$e_i$ in the solution of Henry is 0 for at least one i
$np_i$ in the solution of @probabilityislogic is 0 for at least one i
Which makes the divisions impossible. Now saying that $p=0$ means that it is impossible to have that outcome. If so, you might as well just erase it from the data (see comment of @cardinal). If you mean highly improbable, a first 'solution' might be to increase that 0 chance with a very small number.
Given :
X <- c(0, 0, 0, 8, 6, 2, 0, 0)
p <- c(0.406197174, 0.088746395, 0.025193306, 0.42041479,
0.03192905, 0.018328576, 0.009190708, 0)
You could do :
p2 <- p + 1e-6
chisq.test(X, p2)
Pearson's Chi-squared test
data: X and p2
X-squared = 24, df = 21, p-value = 0.2931
But this is not a correct result. In any case, one should avoid using the chi-square test in these borderline cases. A better approach is a bootstrap: calculate an adapted test statistic and compare the value from the sample with the distribution obtained by the bootstrap.
In R code this could be (step by step) :
# The function to calculate the adapted statistic.
# We add 0.5 to the expected value to avoid dividing by 0
Statistic <- function(o,e){
e <- e+0.5
sum(((o-e)^2)/e)
}
# Set up the bootstraps, based on the multinomial distribution
n <- 10000
bootstraps <- rmultinom(n, size=sum(X), p=p)
# calculate the expected values
expected <- p*sum(X)
# calculate the statistic for the sample and the bootstrap
ChisqSamp <- Statistic(X, expected)
ChisqDist <- apply(bootstraps, 2, Statistic, expected)
# calculate the p-value
p.value <- sum(ChisqSamp < sort(ChisqDist))/n
p.value
This gives a p-value of 0, which is much more in line with the difference between observed and expected. Mind you, this method assumes your data is drawn from a multinomial distribution. If this assumption doesn't hold, the p-value doesn't hold either. | Can chi square be used to compare proportions? | Using the extra information you gave (being that quite some of the values are 0), it's pretty obvious why your solution returns nothing. For one, you have a probability that is 0, so :
$e_i$ in the s | Can chi square be used to compare proportions?
Using the extra information you gave (namely that quite a few of the values are 0), it's pretty obvious why your solution returns nothing. For one, you have a probability that is 0, so:
$e_i$ in the solution of Henry is 0 for at least one i
$np_i$ in the solution of @probabilityislogic is 0 for at least one i
Which makes the divisions impossible. Now saying that $p=0$ means that it is impossible to have that outcome. If so, you might as well just erase it from the data (see comment of @cardinal). If you mean highly improbable, a first 'solution' might be to increase that 0 chance with a very small number.
Given :
X <- c(0, 0, 0, 8, 6, 2, 0, 0)
p <- c(0.406197174, 0.088746395, 0.025193306, 0.42041479,
0.03192905, 0.018328576, 0.009190708, 0)
You could do :
p2 <- p + 1e-6
chisq.test(X, p2)
Pearson's Chi-squared test
data: X and p2
X-squared = 24, df = 21, p-value = 0.2931
But this is not a correct result. In any case, one should avoid using the chi-square test in these borderline cases. A better approach is a bootstrap: calculate an adapted test statistic and compare the value from the sample with the distribution obtained by the bootstrap.
In R code this could be (step by step) :
# The function to calculate the adapted statistic.
# We add 0.5 to the expected value to avoid dividing by 0
Statistic <- function(o,e){
e <- e+0.5
sum(((o-e)^2)/e)
}
# Set up the bootstraps, based on the multinomial distribution
n <- 10000
bootstraps <- rmultinom(n, size=sum(X), p=p)
# calculate the expected values
expected <- p*sum(X)
# calculate the statistic for the sample and the bootstrap
ChisqSamp <- Statistic(X, expected)
ChisqDist <- apply(bootstraps, 2, Statistic, expected)
# calculate the p-value
p.value <- sum(ChisqSamp < sort(ChisqDist))/n
p.value
This gives a p-value of 0, which is much more in line with the difference between observed and expected. Mind you, this method assumes your data is drawn from a multinomial distribution. If this assumption doesn't hold, the p-value doesn't hold either. | Can chi square be used to compare proportions?
Using the extra information you gave (being that quite some of the values are 0), it's pretty obvious why your solution returns nothing. For one, you have a probability that is 0, so :
$e_i$ in the s |
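A related alternative (my note, not part of the answer): base R's chisq.test() can Monte-Carlo the p-value of the ordinary $X^2$ statistic via simulate.p.value, which also avoids the asymptotic approximation, although unlike the code above it does not modify the statistic for small cells. Note that the probabilities must be passed as the named argument p (chisq.test(X, p2) treats the second positional argument as y and performs a different test, which is why it reported df = 21 above) and they must be positive and sum to 1.

X <- c(0, 0, 0, 8, 6, 2, 0, 0)
p <- c(0.406197174, 0.088746395, 0.025193306, 0.42041479,
       0.03192905, 0.018328576, 0.009190708, 0)
p[p == 0] <- 1e-6                # zero probabilities are not allowed; nudge them
p <- p / sum(p)                  # renormalise so the probabilities sum to 1
chisq.test(X, p = p, simulate.p.value = TRUE, B = 10000)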
19,800 | Can chi square be used to compare proportions? | The chi-square test is good as long as the expected counts are large (usually above 10 is fine); below this, the $\frac{1}{E(x_{i})}$ part tends to dominate the test. An exact test statistic is given by:
$$\psi=\sum_{i}x_{i}\log\left(\frac{x_{i}}{np_{i}}\right)$$
Where $x_{i}$ is the observed count in category $i$. $i\in \{\text{red, blue, green, yellow}\}$ in your example. $n$ is your sample size, equal to $55$ in your example. $p_i$ is the hypothesis you wish to test - the most obvious is $p_i=p_j$ (all probabilities are equal). You can show that the chi-square statistic:
$$\chi^{2}=\sum_{i}\frac{(x_{i}-np_{i})^{2}}{np_{i}}\approx 2\psi$$
In terms of the observed frequencies $f_{i}=\frac{x_{i}}{n}$ we get:
$$\psi=n\sum_{i}f_{i}\log\left(\frac{f_{i}}{p_{i}}\right)$$
$$\chi^{2}=n\sum_{i}\frac{(f_{i}-p_{i})^{2}}{p_{i}}$$
(Note that $\psi$ is effectively the KL divergence between the hypothesis and the observed values). You may be able to see intuitively why $\psi$ is better for small $p_{i}$: it does have a $\frac{1}{p_{i}}$, but it also has a log function, which is absent from the chi-square; this "reins in" the extreme values caused by small expected counts. Now the "exactness" of this $\psi$ statistic is not that it has an exact chi-square distribution - it is exact in a probability sense. The exactness comes about in the following manner, from Jaynes (2003), Probability Theory: The Logic of Science.
If you have two hypotheses $H_{1}$ and $H_{2}$ (i.e. two sets of $p_i$ values) that you wish to test, each with test statistics $\psi_{1}$ and $\psi_{2}$ respectively, then $\exp\left(\psi_{1}-\psi_{2}\right)$ gives you the likelihood ratio for $H_{2}$ over $H_{1}$. $\exp\left(\frac{1}{2}\chi_{1}^{2}-\frac{1}{2}\chi_{2}^{2}\right)$ gives an approximation to this likelihood ratio.
Now if you choose $H_{2}$ to be the "sure thing" or "perfect fit" hypothesis, then we will have $\psi_{2}=\chi^{2}_{2}=0$, and thus the chi-square and psi statistic both tell you "how far" from the perfect fit any single hypothesis is, from one which fit the observed data exactly.
Final recommendation: Use $\chi_{2}^{2}$ statistic when the expected counts are large, mainly because most statistical packages will easily report this value. If some expected counts are small, say about $np_{i}<10$, then use $\psi$, because the chi-square is a bad approximation in this case, these small cells will dominate the chi-square statistic. | Can chi square be used to compare proportions? | The chi-square test is good as long as the expected counts are large, usually above 10 is fine. below this the $\frac{1}{E(x_{i})}$ part tends to dominate the test. An exact test statistic is given | Can chi square be used to compare proportions?
The chi-square test is good as long as the expected counts are large (usually above 10 is fine); below this, the $\frac{1}{E(x_{i})}$ part tends to dominate the test. An exact test statistic is given by:
$$\psi=\sum_{i}x_{i}\log\left(\frac{x_{i}}{np_{i}}\right)$$
Where $x_{i}$ is the observed count in category $i$. $i\in \{\text{red, blue, green, yellow}\}$ in your example. $n$ is your sample size, equal to $55$ in your example. $p_i$ is the hypothesis you wish to test - the most obvious is $p_i=p_j$ (all probabilities are equal). You can show that the chi-square statistic:
$$\chi^{2}=\sum_{i}\frac{(x_{i}-np_{i})^{2}}{np_{i}}\approx 2\psi$$
In terms of the observed frequencies $f_{i}=\frac{x_{i}}{n}$ we get:
$$\psi=n\sum_{i}f_{i}\log\left(\frac{f_{i}}{p_{i}}\right)$$
$$\chi^{2}=n\sum_{i}\frac{(f_{i}-p_{i})^{2}}{p_{i}}$$
(Note that $\psi$ is effectively the KL divergence between the hypothesis and the observed values). You may be able to see intuitively why $\psi$ is better for small $p_{i}$: it does have a $\frac{1}{p_{i}}$, but it also has a log function, which is absent from the chi-square; this "reins in" the extreme values caused by small expected counts. Now the "exactness" of this $\psi$ statistic is not that it has an exact chi-square distribution - it is exact in a probability sense. The exactness comes about in the following manner, from Jaynes (2003), Probability Theory: The Logic of Science.
If you have two hypotheses $H_{1}$ and $H_{2}$ (i.e. two sets of $p_i$ values) that you wish to test, each with test statistics $\psi_{1}$ and $\psi_{2}$ respectively, then $\exp\left(\psi_{1}-\psi_{2}\right)$ gives you the likelihood ratio for $H_{2}$ over $H_{1}$. $\exp\left(\frac{1}{2}\chi_{1}^{2}-\frac{1}{2}\chi_{2}^{2}\right)$ gives an approximation to this likelihood ratio.
Now if you choose $H_{2}$ to be the "sure thing" or "perfect fit" hypothesis, then we will have $\psi_{2}=\chi^{2}_{2}=0$, and thus the chi-square and psi statistic both tell you "how far" from the perfect fit any single hypothesis is, from one which fit the observed data exactly.
Final recommendation: use the $\chi^{2}$ statistic when the expected counts are large, mainly because most statistical packages will easily report this value. If some expected counts are small, say about $np_{i}<10$, then use $\psi$, because the chi-square is a bad approximation in this case: these small cells will dominate the chi-square statistic.
The chi-square test is good as long as the expected counts are large, usually above 10 is fine. below this the $\frac{1}{E(x_{i})}$ part tends to dominate the test. An exact test statistic is given |
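As a numerical sketch (added here; the counts 15, 13, 10, 17 are the colour data used in the earlier answer, with $n=55$ and equal hypothesised probabilities), both statistics can be computed directly:

x <- c(15, 13, 10, 17)
n <- sum(x)                             # 55
p <- rep(1 / 4, length(x))              # equal-probability hypothesis
chisq <- sum((x - n * p)^2 / (n * p))   # Pearson chi-square; matches X-squared = 1.9455 reported earlier
psi   <- sum(x * log(x / (n * p)))      # psi as defined in this answer
c(chisq = chisq, two_psi = 2 * psi)     # 2*psi approximates the chi-square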