idx | question | answer |
|---|---|---|
36,701 | The right distance for the clustering. Maybe Mahalanobis? | The distance measure you use for cluster analysis should depend on your data. For example, in ecology we frequently work with presence/absence/abundance data on ecological communities, and use (dis)similarity measures such as the Sørensen and Bray-Curtis indices.
There is nothing specifically against using Mahalanobis distance. Euclidean distance may be the most intuitive to use, and perhaps for the field that you are in it generally works well; however, it does not work well for all datasets. One thing you can do is try different distance measures and different clustering techniques, and compare cophenetic correlations across analyses to see which shows the pattern best supported by the data; also look at the resulting clusters to see what makes sense and is explainable from the existing literature in your field.
There is also a relevant post on CrossValidated, and a Google search for "non-euclidean distance cluster analysis" brings up some useful results.
Hope that helps a bit!
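As a concrete illustration (my sketch in Python, not part of the original answer), the Bray-Curtis dissimilarity between two abundance vectors is simply the sum of absolute differences over the sum of totals; the site vectors below are invented:

```python
# Bray-Curtis dissimilarity: sum|u_i - v_i| / sum(u_i + v_i).
# Identical communities score 0; communities sharing no species score 1.
def bray_curtis(u, v):
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den

# Hypothetical species-abundance vectors for two sites
site_a = [10, 0, 4, 6]
site_b = [8, 2, 4, 0]
print(bray_curtis(site_a, site_b))  # 10/34, about 0.294
```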
36,702 | The right distance for the clustering. Maybe Mahalanobis? | Maybe have a look at correlation clustering, which is meant to find clusters that have a non-spherical shape.
If you want to give Mahalanobis a try, note that Gaussian Mixture Model EM clustering does use Mahalanobis distance.
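For concreteness (my Python sketch, not from the answer), the squared Mahalanobis distance is $(x-\mu)^\top \Sigma^{-1}(x-\mu)$; the 2-by-2 case can be inverted by hand, and with the identity covariance it reduces to squared Euclidean distance:

```python
# Squared Mahalanobis distance in 2-D; the 2x2 covariance is inverted by hand.
def mahalanobis_sq(x, mu, cov):
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    dx = [x[0] - mu[0], x[1] - mu[1]]
    t = [inv[0][0] * dx[0] + inv[0][1] * dx[1],
         inv[1][0] * dx[0] + inv[1][1] * dx[1]]
    return dx[0] * t[0] + dx[1] * t[1]

print(mahalanobis_sq([3, 4], [0, 0], [[1, 0], [0, 1]]))  # 25.0
```

With a non-spherical covariance such as [[4, 0], [0, 1]], distances along the first axis are down-weighted, which is what lets Mahalanobis-based methods handle elongated clusters.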
36,703 | About linear combinations of independent Brownian motions | The sum of two independent normal random variables is normal. Therefore, for a given fixed $t$,
$$
\frac{(B(t)+W(t))-(B(0)+W(0))}{\sqrt{2}} \sim \mathcal N(0,\sqrt{t}) \>,
$$
i.e., a normal distribution with mean zero and standard deviation $\sqrt{t}$; here the starting values are known.
Furthermore, independence of increments is inherited from the independence of increments of the underlying stochastic processes, which you should check by looking at the finite dimensional distributions and using properties of the multivariate normal distribution.
For the correlation, we can get the covariance between $B_t$ and $X_t=(B_t+W_t)/\sqrt{2}$ for fixed $t$, assuming both processes start at the origin to keep things simple.
$COV(B_t,X_t)=E(X_tB_t)-E(X_t)E(B_t)=\frac{1}{\sqrt{2}}E((B_t)^2+B_tW_t)=\frac{1}{\sqrt{2}}E((B_t)^2)=\frac{t}{\sqrt{2}}$
The last two steps are from the independence of $B_t,W_t$ and the fact that $Var(B_t|B_0=0)=t$, respectively.
The correlation is just the covariance divided by the product of the standard deviations of $X_t$ and $B_t$, which are $\sigma_{B_t}=\sqrt{t}=\sigma_{X_t}$. So the correlation is just $\frac{1}{\sqrt{2}} \approx 0.71$.
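A quick Monte Carlo sanity check of that $1/\sqrt{2}$ correlation (my sketch, not part of the original answer; at a fixed $t$ we only need two independent normals):

```python
import math
import random

# At fixed t, B_t and W_t are independent N(0, sqrt(t)), and
# X_t = (B_t + W_t)/sqrt(2); the sample correlation should be near 1/sqrt(2).
random.seed(0)
t, n = 1.0, 100_000
b = [random.gauss(0, math.sqrt(t)) for _ in range(n)]
w = [random.gauss(0, math.sqrt(t)) for _ in range(n)]
x = [(bi + wi) / math.sqrt(2) for bi, wi in zip(b, w)]

mb, mx = sum(b) / n, sum(x) / n
cov = sum((bi - mb) * (xi - mx) for bi, xi in zip(b, x)) / n
sb = math.sqrt(sum((bi - mb) ** 2 for bi in b) / n)
sx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / n)
print(cov / (sb * sx))  # approximately 0.71
```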
36,704 | Jeffreys prior for continuous uniform distribution | I think the other answer is wrong, so I will give a detailed development here. First, let $X_1, \dotsc, X_n$ be iid uniform on the interval $(0,\theta)$. Then the likelihood function can be written as
$$
L(\theta)= \theta^{-n} \cdot \mathbb{1}(\theta \ge T)
$$ where $T=\max(X_1, \dotsc, X_n)$ is the sufficient statistic for $\theta$. The log likelihood then can be written
$$
l(\theta)=\log L(\theta)= \begin{cases} -n \log \theta & \theta\ge T \\ -\infty & \theta<T \end{cases}
$$ and its first derivative (where it exists) can be written
$$
-n/\theta
$$ with expectation equal to $-n/\theta \not= 0$, so we cannot calculate the Fisher information via the (negative) expected second derivative, since that identity depends on the expected score being zero. If we nevertheless do it, we end up with a "Fisher information" of $-n/\theta^2$, which is negative, so of course impossible.
Then, using directly the definition of Fisher information, see Wikipedia: Fisher information, we get
$$\DeclareMathOperator{\E}{\mathbb{E}}
I(\theta)=\E_{\theta}\left\{ \left[\frac{\partial}{\partial\theta}\log f(x;\theta)\right]^2\right\} = \int_0^\theta [-n/\theta]^2 (1/\theta)\; dx = (n/\theta)^2
$$
Then the Jeffreys uninformative prior is proportional to its square root, that is,
$$
\pi(\theta) \propto 1/\theta, \quad \theta>0
$$
which is an improper prior. But when $n\ge 1$ we get a proper posterior, which is Pareto with scale parameter $x_m = T$ and shape $\alpha = n$. The density is given by
$$
\pi(\theta \,|\, T) = \frac{n}{\theta} \left( \frac{T}{\theta}\right)^{n}
$$ for $\theta \ge T$.
Note that the Jeffreys prior can be seen as a degenerate Pareto conjugate prior with parameters $x_m = 0, \, \alpha = 0$.
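As a numerical sanity check (mine, not from the answer), the stated posterior density should integrate to 1; the cutoff and grid below are arbitrary choices:

```python
import math

# Posterior under the Jeffreys prior: pi(theta | T) = (n/theta) * (T/theta)^n
# for theta >= T.  Integrate by midpoints on a log-spaced grid over
# [T, 10^6 T]; the analytic tail mass beyond the cutoff, (T/cutoff)^n,
# is negligible here.
def posterior(theta, T, n):
    return (n / theta) * (T / theta) ** n

T, n = 2.0, 3
steps = 200_000
lo, hi = math.log(T), math.log(T * 1e6)
width = (hi - lo) / steps
total = 0.0
for i in range(steps):
    mid = math.exp(lo + (i + 0.5) * width)
    total += posterior(mid, T, n) * mid * width  # d(theta) = theta * d(log theta)
print(total)  # close to 1.0
```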
36,705 | Jeffreys prior for continuous uniform distribution | The Jeffreys prior for $\theta$ doesn't depend upon the indicator function, although of course the posterior will. The square root of the second derivative of the log likelihood function is all you need:
$p(\theta) = \left(-\frac{\text{d}^2(\log \theta)}{\text{d}\theta^2}\right)^{1/2}$
When moving on to the posterior, you'll have to remember that indicator function; if $x \leq \theta$ for all $x$, the data say something important about the values $\theta$ can take. But it's perfectly OK to have a prior that covers a range of values, part of which is ruled out once you observe the data.
36,706 | How to figure out what numbers often appear together in a dataset? | This question calls for a modification of the solution to a sequence counting problem: as noted in comments, it requests a cross-tabulation of co-occurrences of values.
I will illustrate a naive but effective modification with R code. First, let's introduce a small sample dataset to work with. It's in the usual matrix format, one case per row.
x <- matrix(c(3,5,7,10,13,
3,5,8,10,15,
2,5,10,11,18,
1,3,4,6,8,
2,4,6,12,14,
3,5,8,10,15),
ncol=5, byrow=TRUE)
This solution generates all possible combinations of $m$ items (per row) at a time and tabulates them:
m <- 3
x <- t(apply(x, 1, sort))      # sort within each row
x0 <- apply(x, 1, combn, m=m)  # all m-combinations of each row
y <- array(x0, c(m, length(x0)/(m*dim(x)[1]), dim(x)[1]))
ngrams <- apply(y, c(2,3), function(s) paste("(", paste(s, collapse=","), ")", sep=""))  # encode each combination as a string
z <- sort(table(as.factor(ngrams)), decreasing=TRUE)  # tabulate, most frequent first
The tabulation is in z, sorted by descending frequency. It is useful by itself or easily post-processed. Here are the first few entries of the example:
> head(z, 10)
(3,5,10) (3,10,15) (3,5,15) (3,5,8) ... (8,10,15)
3 2 2 2 ... 2
How efficient is this? For $p$ columns there are $\binom{p}{m}$ combinations to work out, which grows as $O(p^m)$ for fixed $m$: that's pretty bad, so we are limited to relatively small numbers of columns. To get a sense of the timing, repeat the preceding with a small random matrix and time it. Let's stick with values between $1$ and $20,$ say:
n.col <- 8 # Number of columns
n.cases <- 10^3 # Number of rows
x <- matrix(sample.int(20, size=n.col*n.cases, replace=TRUE), ncol=n.col)
The operation took two seconds to tabulate all $m=3$-combinations for $1000$ rows and $8$ columns. (It can go an order of magnitude faster by encoding the combinations numerically rather than as strings; that works only when $\binom{p}{m}$ is small enough to be represented exactly as an integer or float, approximately $10^{16}$.) It scales linearly with the number of rows. (Increasing the number of possible values from $20$ to $20,000$ only slightly lengthened the time.) If that suggests overly long times to process a particular dataset, then a more sophisticated approach will be needed, perhaps using results for very small $m$ to limit the higher-order combinations that are computed and counted.
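The same tabulation can be sketched in Python with itertools (this translation is mine, not part of the original answer), using the example matrix from above:

```python
from collections import Counter
from itertools import combinations

# Count every sorted m-combination occurring within a row, across all rows.
rows = [(3, 5, 7, 10, 13), (3, 5, 8, 10, 15), (2, 5, 10, 11, 18),
        (1, 3, 4, 6, 8), (2, 4, 6, 12, 14), (3, 5, 8, 10, 15)]
m = 3
counts = Counter()
for row in rows:
    counts.update(combinations(sorted(row), m))
print(counts.most_common(1))  # [((3, 5, 10), 3)], matching the R output
```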
36,707 | How to figure out what numbers often appear together in a dataset? | You are not looking for clustering.
Instead, you are looking for frequent itemset mining. There are dozens of algorithms for this; probably the most widely known are Apriori, FP-Growth, and Eclat.
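A minimal illustration of the idea (my sketch; this is brute-force support counting at one level, where a real Apriori implementation would prune candidates level by level):

```python
from collections import Counter
from itertools import combinations

# Keep only the k-item sets whose support (number of rows containing them)
# meets a minimum threshold.
def frequent_itemsets(rows, k, min_support):
    counts = Counter()
    for row in rows:
        counts.update(combinations(sorted(set(row)), k))
    return {items: n for items, n in counts.items() if n >= min_support}

rows = [(3, 5, 8, 10), (3, 5, 10), (5, 10, 15), (3, 5, 10, 15)]
print(frequent_itemsets(rows, 2, 3))  # (5, 10) occurs in all four rows
```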
36,708 | How to figure out what numbers often appear together in a dataset? | I use a simple Excel spreadsheet and load all the drawings, then do a data sort. A data sort quickly shows which numbers have never played together. I've also written macros that can run pairs and triples.
36,709 | Missing data at random | You can't tell absolutely, at least not from statistics alone.
You can compare the cases that are missing to those that are not on any variables present in both, but there could still be other things that aren't in the dataset. For a simple example, suppose your dataset consists only of two variables, "race/ethnicity" and "income". You could check whether the proportions missing are similar across ethnicities, but people could be (and quite likely are) skipping the income question for other reasons.
The only way to tell for sure that data are missing completely at random or missing at random is to know why they are missing. In my experience, this sometimes lets you conclude they are MCAR (when it's some documented computer glitch, for instance), and sometimes lets you tell they are NOT missing at random, but it does not let you conclude they are MAR.
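As a small illustration of that comparison (my sketch; the records are invented), compute the share of missing incomes within each group and see whether the shares differ:

```python
# None marks a missing income.  Very unequal shares of missingness across
# groups are evidence against MCAR (though not proof of MAR).
records = [("A", 50_000), ("A", None), ("A", 62_000),
           ("B", None), ("B", None), ("B", 48_000)]

def missing_share(records):
    totals, missing = {}, {}
    for group, income in records:
        totals[group] = totals.get(group, 0) + 1
        if income is None:
            missing[group] = missing.get(group, 0) + 1
    return {g: missing.get(g, 0) / n for g, n in totals.items()}

print(missing_share(records))  # A: 1/3 missing, B: 2/3 missing
```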
36,710 | Derivation of the Satterthwaite approximation | Background: Understanding the method of moments (MoM) in a basic way
Motivation for the method: The (strong) Law of Large Numbers (LLN) gives us reason to think that, at least for large samples, a sample moment will be close to the corresponding population moment (note that the LLN applies to higher moments by taking $Z=X^j$). Thus, if we have iid $X_i, i=1,\ldots,n$, Casella & Berger's $m_j = \frac{1}{n} \sum_{i=1}^n X_i^j$ is set equal to $\text{E}(m_j) = \text{E}(X_i^j) = \mu_j$.
Why you only need consider first moments: Consider Casella & Berger's $m_j = \frac{1}{n} \sum_{i=1}^n X_i^j$ and note that (as we did in the motivating argument), for any $j$ we can just take $Z_i = X_i^j$ and be left with $m_1$ for a different random variable. That is, all MoM estimators can be thought of as first moment MoMs; we can simply make that substitution to get any other moment we need. So MoM is really just setting $m=\mu$ where $m = \frac{1}{n} \sum_{i=1}^n X_i$ for some set of $iid$ $X_i \sim f_X$.
Why you can think of MoM as 'drop expectations': (i) Take $Z = \frac{1}{n} \sum_{i=1}^n X_i$ and note that $\text{E}(Z)=\text{E}(X)$ by linearity of expectation, so MoM simply takes $Z=\text{E}(Z)$. Similarly, taking $Z^j = \text{E}(Z^j)$ follows immediately from the argument we already used - i.e. we can think of MoM as 'drop expectations', and it will be reasonable because we have some random variable which will be close to its expectation; (ii) more generally, we could reasonably do this ('drop expectations') for any $Z$ that we had reason to think would be 'close to' its expectation.
--
Now for the expression in the section relating to Satterthwaite in Casella & Berger
Casella & Berger match first and second moments of $Z=\sum_{i=1}^k a_iY_i$ that is, they take
$\text{E}(Z) = Z$ and $\text{E}(Z^2)=Z^2$, the second of which gives an estimate of $\nu$.
Note that $Z=\sum_i a_iY_i$ is a constant times a sample expectation; there's a clear sense in which we might expect that $Z\approx \text{E}(Z)$ and $Z^2 \approx \text{E}(Z^2)$, but we don't actually have to justify it here, we're just following their argument about what happens when we do it. | Derivation of the Satterthwaite appproximation | Background: Understanding the method of moments (MoM) in a basic way
Motivation for the method: The (strong) Law of Large Numbers (LLN) gives us reason to think that (at least for large samples), a sa | Derivation of the Satterthwaite appproximation
Background: Understanding the method of moments (MoM) in a basic way
Motivation for the method: The (strong) Law of Large Numbers (LLN) gives us reason to think that (at least for large samples), a sample expectation will be close to the population expectation (note that the LLN applies to higher moments by taking $Z=X^j$). Thus, if we have $iid$ $X_i, i=1,\ldots,n$ we have Casella & Berger's $m_j = \frac{1}{n} \sum_{i=1}^n X_i^j$ is set equal to $\text{E}(m_j) = \text{E}(X_i^j) = \mu_j$.
Why you only need consider first moments: Consider Casella & Berger's $m_j = \frac{1}{n} \sum_{i=1}^n X_i^j$ and note that (as we did in the motivating argument), for any $j$ we can just take $Z_i = X_i^j$ and be left with $m_1$ for a different random variable. That is, all MoM estimators can be thought of as first moment MoMs; we can simply make that substitution to get any other moment we need. So MoM is really just setting $m=\mu$ where $m = \frac{1}{n} \sum_{i=1}^n X_i$ for some set of $iid$ $X_i \sim f_X$.
Why you can think of MoM as 'drop expectations': (i) Take $Z = \frac{1}{n} \sum_{i=1}^n X_i$ and note that $\text{E}(Z)=\text{E}(X)$ by linearity of expectation, so MoM simply takes $Z=\text{E}(Z)$. Similarly, taking $Z^j = \text{E}(Z^j)$ follows immediately from the argument we already used - i.e. we can think of MoM as 'drop expectations', and it will be reasonable because we have some random variable which will be close to its expectation; (ii) more generally, we could reasonably do this ('drop expectations') for any $Z$ that we had reason to think would be 'close to' its expectation.
--
Now for the expression in the section relating to Satterthwaite in Casella & Berger
Casella & Berger match first and second moments of $Z=\sum_{i=1}^k a_iY_i$ that is, they take
$\text{E}(Z) = Z$ and $\text{E}(Z^2)=Z^2$, the second of which gives an estimate of $\nu$.
Note that $Z=\sum_i a_iY_i$ is a constant times a sample expectation; there's a clear sense in which we might expect that $Z\approx \text{E}(Z)$ and $Z^2 \approx \text{E}(Z^2)$, but we don't actually have to justify it here, we're just following their argument about what happens when we do it.
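For a concrete instance (my sketch; the numbers are invented, and this formula is the standard two-sample result that the moment matching yields), the Welch-Satterthwaite degrees of freedom can be computed directly:

```python
# Welch-Satterthwaite approximate degrees of freedom for the two-sample
# t statistic: nu = (v1 + v2)^2 / (v1^2/(n1-1) + v2^2/(n2-1)),
# where v_i = s_i^2 / n_i is the estimated variance of each sample mean.
def satterthwaite_df(s1_sq, n1, s2_sq, n2):
    v1, v2 = s1_sq / n1, s2_sq / n2
    return (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))

nu = satterthwaite_df(4.0, 10, 9.0, 12)
print(nu)  # about 19.2; always between min(n1 - 1, n2 - 1) and n1 + n2 - 2
```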
36,711 | Derivation of the Satterthwaite appproximation | As pointed out patiently by Glen_b above, since $X_{i}$ is an independent random sample, we have $$E(\sum a_{i}Y_{i})=\sum a_{i}E(Y_{i})=\sum a_{i} Y_{i}$$where the last equation $$E(Y_{i})=Y_{i}$$ follows from the fact that from the definition of method of moments we have $$E(X)=\frac{\sum X_{i}}{n}=\overline{X}=X \text{when $n$=1} $$So the author's proof is justified. | Derivation of the Satterthwaite appproximation | As pointed out patiently by Glen_b above, since $X_{i}$ is an independent random sample, we have $$E(\sum a_{i}Y_{i})=\sum a_{i}E(Y_{i})=\sum a_{i} Y_{i}$$where the last equation $$E(Y_{i})=Y_{i}$$ fo | Derivation of the Satterthwaite appproximation
As pointed out patiently by Glen_b above, since the $Y_{i}$ are an independent random sample, we have $$E\left(\sum a_{i}Y_{i}\right)=\sum a_{i}E(Y_{i})=\sum a_{i} Y_{i},$$ where the last equality $$E(Y_{i})=Y_{i}$$ follows from the definition of the method of moments: we set $$E(X)=\frac{\sum X_{i}}{n}=\overline{X},$$ which equals $X$ when $n=1$. So the author's proof is justified.
36,712 | Adverse results of clustering criteria | The question you should ask yourself is this: what do you want to achieve?
All these criteria are nothing but heuristics. You judge the result of one mathematical optimization technique by yet another mathematical function. This does not actually measure whether the result is good, but just whether the data fit certain assumptions.
Now, since you have a global data set in latitude and longitude, Euclidean distance is already not a good choice. However, some of these criteria and algorithms (k-means…) need this inappropriate distance function.
Some things you should try:
Better algorithms. Try DBSCAN and OPTICS, neither of which requires you to specify the number of clusters! They have other parameters, but e.g. the distance threshold and minimum number of points should be much easier to set for this data set.
Visualization. Instead of looking at statistics of some mathematical measure, choose the best result by visual inspection! So first of all, visualize the clusters to see if the result makes any sense at all.
Consider what you want to find. A mathematical criterion will be happy if you separate the continents. But you don't need an algorithm to do this, the continents are quite well-known already! So what do you want to discover?
Remove outliers. Both k-means and hierarchical clustering don't like outliers that much, and you may need to increase the number of clusters to find by the number of outliers in the data (DBSCAN and OPTICS mentioned above are much more robust towards outliers).
More appropriate distance function. The earth is approximately spherical, use the great circle distance instead of Euclidean distance.
Try converting the data into a 3D ECEF coordinate system, if you need to use Euclidean distance. This will yield cluster centers below the earth's surface, but it will allow clustering Alaska, and the Euclidean distance is at least a lower bound on the true surface distance.
Have a look at e.g. this related question / answer on stackoverflow.
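A minimal Python sketch of the last two suggestions (great-circle distance and the ECEF projection); the function names and the spherical-earth radius are my own choices for illustration, not taken from any particular package:

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean radius of the spherical-earth approximation

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine formula: great-circle distance between two (lat, lon) points, degrees in, km out."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def to_ecef(lat, lon, radius=EARTH_RADIUS_KM):
    """Project (lat, lon) onto a sphere in 3D earth-centered coordinates.

    The Euclidean distance between such points is the chord length, a lower
    bound of the true surface distance, so k-means can handle e.g. Alaska
    without a longitude wrap-around problem.
    """
    phi, lam = math.radians(lat), math.radians(lon)
    return (radius * math.cos(phi) * math.cos(lam),
            radius * math.cos(phi) * math.sin(lam),
            radius * math.sin(phi))
```

For example, `great_circle_km(0, 0, 90, 0)` (equator to pole) returns a quarter of the spherical circumference, about 10,008 km.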
36,713 | Adverse results of clustering criteria | Longitude and latitude are angles which define points on a sphere so you should probably be looking at the Great Circle Distance or other geodesic distances between points rather than the Euclidean distance.
Also as has been mentioned, certain explicitly model-based clustering algorithms like mixture models and implicitly model-based ones like K-means, make assumptions about the shape and size of the clusters.
In this situation are you expecting your data to fit an underlying model?
If not then density-based methods which don't make assumptions about the shape/size of the clusters might be more appropriate.
36,714 | Difference between independent and non-informative censoring | The first set of definitions seem right to me.
Your adviser's definition two seems to be a conflation of independent and non-informative censoring assumptions. I haven't seen non-informative censoring defined before with reference to the covariate profile.
The following text is from "Survival analysis: A self-learning text" by Kleinbaum and Klein (3rd edition, 2011, Springer) where pages 37-43 deal with censoring assumptions:
p. 38 (emphasis as per original text)
Independent censoring essentially means that within any subgroup of interest, the subjects who are censored at time t should be representative of all the subjects in that subgroup who remained at risk at time t with respect to their survival experience. In other words, censoring is independent provided that it is random within any subgroup of interest.
So independent censoring is a less restrictive form of random censoring (where we would not be taking into account the survival profile by covariates).
p. 42
Non-informative censoring occurs if the distribution of survival times (T) provides no information about the distribution of censorship times (C), and vice versa.
However... and to the point of the materiality of the distinction:
p. 42 (emphasis added by me this time!)
The assumption of non-informative censoring is often justifiable when censoring is independent and/or random; nevertheless, these assumptions are not equivalent.
36,715 | How to use rpart's result in prediction | fit = rpart(formula, data =, method =, control=)
fitVariablesUsed <- names(trainData)[1:20]  # names of the 20 predictor columns used in the fit (trainData is a placeholder for your training data frame)
preds <- predict(fit, newdata = newdata[, fitVariablesUsed], type = "prob")
this will return a probability matrix for the observations, meaning it will give the probability that each observation is in class 1, class 2, etc.
make sure that the columns all line up correctly between the matrix you made the model with and the matrix you're going to make the predictions with.
The variable I created, fitVariablesUsed, which collects the names of 20 variables (just for example) from the data frame used for the fit, can then be used with the new data frame, so long as the columns are all named the same thing.
36,716 | How to use rpart's result in prediction | To make a prediction based on a different dataframe than the one used to train your model (e.g. the test dataframe), you should use the newdata parameter to predict() rather than data, because data is not a real parameter (documentation).
predict(output_of_rpart, type=, newdata=testDataFrame)
36,717 | What is the fastest unsupervised feature learning algorithm? | K-means is pretty fast, as is PCA. If you use a sparse SVD library (like the irlba package for R) you can approximate PCA pretty quickly on large datasets.
I think there are some pretty fast algorithms for online (also known as sequential) k-means.
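A bare-bones sketch of the sequential (online) k-means idea in Python: the standard update moves the nearest center toward each new point with a step of 1/(points assigned so far). The toy data and starting centers are invented for illustration:

```python
def online_kmeans(points, centers):
    """Sequential k-means: each point is seen once and updates its nearest center."""
    centers = [list(c) for c in centers]
    counts = [0] * len(centers)
    for x in points:
        # assign the point to the nearest center (squared Euclidean distance)
        j = min(range(len(centers)),
                key=lambda i: sum((c - v) ** 2 for c, v in zip(centers[i], x)))
        counts[j] += 1
        step = 1.0 / counts[j]  # step size shrinks as the center accumulates points
        centers[j] = [c + step * (v - c) for c, v in zip(centers[j], x)]
    return centers

# two well-separated one-dimensional "clusters"
data = [[0.1], [0.2], [9.9], [0.0], [10.1], [10.0]]
final = online_kmeans(data, centers=[[0.0], [10.0]])
```

With this stream the centers end up at the running means of the two groups, near 0.1 and 10.0.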
36,718 | What is the fastest unsupervised feature learning algorithm? | One thing to consider is the MDA, the Marginalized Denoising Autoencoder. It trains orders of magnitude faster than an SdA and may be the solution that you're looking for. [1][2] I personally am more interested in the latter paper as learning non-linear representations is important in domains which have highly non-linear structure. In my case, face images.
The "online" setting for an algorithm that uses stochastic gradient descent is simply setting the mini-batch size to 1, that is, computing the gradient and taking a step after every new sample. It should be obvious that, in the case of online learning, standard "batch" gradient descent will never update its weights. (Batch gradient descent only updates its weights after seeing all of the examples in a dataset.) The actual minibatch size is going to change based on the hardware that you're running. You may even find out that, since data bandwidth will most likely be the overriding concern, CPUs might perform better than GPUs--at least until nvlink comes out.
Yoshua Bengio has some interesting thoughts on what it means to train a network in an online setting (where each new training sample $x_t$ comes in at timestep $t$, is seen once and then never seen again). He suggests that, in the simplified case of independent and identically distributed (i.i.d.) data, "an online learner is performing stochastic gradient descent on its generalization error". [3] It's hopefully clear that most online datasets are not i.i.d.; they usually exhibit temporal correlation. (I certainly would not want to watch an i.i.d. stream of images for too long. ^^)
http://arxiv.org/abs/1206.4683
http://arxiv.org/abs/1206.4683
http://arxiv.org/abs/1206.5533
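The "minibatch size 1" online setting described above can be sketched for the simplest possible model (one weight, squared error); the data stream and learning rate here are invented for illustration:

```python
def online_sgd(stream, lr=0.1):
    """Fit y ~ w*x by stochastic gradient descent, one sample per update:
    each (x, y) pair is seen once and then discarded, as in the online setting."""
    w = 0.0
    for x, y in stream:
        grad = 2 * (w * x - y) * x  # gradient of the squared error on this single sample
        w -= lr * grad
    return w

# a stream generated from y = 2x; repeating the cycle stands in for a long stream
stream = [(x, 2.0 * x) for x in (1.0, 2.0, 0.5, 1.5)] * 50
w_hat = online_sgd(stream)  # converges to 2.0
```

Setting `lr` too large (here, anything that makes $|1 - 2\,\mathrm{lr}\,x^2| \ge 1$ for the stream's $x$ values) would make the same loop diverge, which is why the step size matters so much in the online regime.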
36,719 | Why isn't the Dantzig selector popular in applied statistics? | The $\ell_\infty$ loss term is VERY sensitive to outliers.
Most (all?) of the theory for the Dantzig selector is under the assumption of normal / Gaussian errors. With this error distribution, there isn't much difference between $\ell_2$ loss and $\ell_\infty$ loss. However, with real data, we would like to be less sensitive to outliers.
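A quick numeric illustration of this sensitivity (residual values invented): the $\ell_\infty$ norm of a residual vector is determined entirely by its single worst entry, while the $\ell_2$ norm dilutes the outlier's influence across all observations:

```python
def l2_norm(r):
    return sum(e * e for e in r) ** 0.5

def linf_norm(r):
    return max(abs(e) for e in r)

clean = [0.5] * 100            # well-behaved residuals
dirty = [0.5] * 99 + [10.0]    # the same residuals with one gross outlier

ratio_linf = linf_norm(dirty) / linf_norm(clean)  # 20.0: the outlier dominates completely
ratio_l2 = l2_norm(dirty) / l2_norm(clean)        # ~2.23: the outlier is averaged down
```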
36,720 | High level interpretation of Cramer-Rao bound and Fisher information matrix | I think you are correct in your interpretations. The below is an informal response to your points.
If an unbiased estimator achieves the CRB, then it is the optimal estimator out of the class of unbiased estimators for the parameter.
Because the estimator achieves the CRB, the greater the FIM, the more certain you are in the estimator's estimates.
If the FIM is singular, then you basically have unlimited uncertainty in the estimator's estimates.
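A small worked example of the first two points (the Bernoulli model and the numbers are my own illustration): for $n$ i.i.d. Bernoulli($p$) observations the FIM is $n/(p(1-p))$, the CRB is its inverse, and the sample mean attains the bound, so a larger FIM directly means a tighter variance bound, i.e. more certainty:

```python
def fisher_info_bernoulli(p, n):
    """Fisher information about p carried by n i.i.d. Bernoulli(p) observations."""
    return n / (p * (1 - p))

def crb(p, n):
    """Cramer-Rao lower bound on the variance of any unbiased estimator of p."""
    return 1.0 / fisher_info_bernoulli(p, n)

def var_sample_mean(p, n):
    """Exact variance of the sample mean of n Bernoulli(p) draws."""
    return p * (1 - p) / n

# the sample mean attains the bound, so it is the efficient estimator here;
# quadrupling n quadruples the FIM and quarters the bound
p, n = 0.3, 100
```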
36,721 | Smoothing algorithm for irregular time interval | The simplest algorithm is the median filter. You can find a C++ implementation in the R package robfilter. That implementation also includes an 'online' version that only uses past data and implements some algorithmic short-cuts.
Of course you will still have to set the "width" argument yourself, but this is the counterpart of looking for a simple algorithm (this package also contains more sophisticated smoothing algorithms).
The median filter is essentially a rolling-window median, so it inherits the good behaviour of the median in terms of insensitivity to outliers and non-parametric interpretability.
So, considering the dataset you posted, the median filter would yield:
and the code:
a1<-read.table("sodat.txt",header=TRUE)
library("robfilter")
d1<-med.filter(a1[,2],width=10,online=TRUE)
plot(d1)
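The rolling-window median at the heart of med.filter, sketched in plain Python (the edge handling here is my own simplification, not what robfilter actually does):

```python
from statistics import median

def median_filter(x, width):
    """Rolling-window median: a robust smoother that ignores isolated outliers."""
    half = width // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half): i + half + 1]  # window shrinks near the edges
        out.append(median(window))
    return out

smoothed = median_filter([1, 2, 100, 3, 4], width=3)  # [1.5, 2, 3, 4, 3.5]: the spike is gone
```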
36,722 | Smoothing algorithm for irregular time interval | I think what you want is a median filter: "Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise".
36,723 | Correct interpretation of Lmer output | I address your interpretations 1 and 2 in order:
1) How you interpret factors depends on which level of the factor is the reference category. The fact that the model calls it Type2 suggests to me that Type1 is the reference, and that the parameter represents how the estimate changes when Type == 2. Thus, I disagree with your interpretation. I would say TotalPayoff is higher when Type == 2 because the parameter is positive and significant (assuming alpha == .05).
2) I think your interpretation basically makes sense. I prefer to say it like this: The slope for PgvnD changes by the amount estimated as the parameter for the interaction term when Asym == 1 (i.e. when Asym is not equal to the reference category). So the PgvnD parameter is its main effect estimate plus the interaction estimate when Asym == 1. This would be -8.466 + 26.618. Keep in mind, though, if you want to make an estimate of TotalPayoff you must also account for the main effect of Asym. Bottom line, the interaction parameter tells you how much the main effects change under the conditions specified by the interaction (value of PgvnD and the Asym == 1).
Alternatively, the interaction allows you to say that the effect of Asym==1 on TotalPayoff changes positively along with changes in PgvnD by the amount estimated as the interaction parameter.
A quick example: ignoring all but the two discussed main effects which I now refer to as $A$ and $P$, and the interaction $AP$,
$$ y = \beta_{A}A + \beta_{P}P + \beta_{AP}AP $$
Clearly, if $A$ is $0$ (i.e. reference category), then neither the $AP$ interaction nor the main effect for $A$ contributes anything to $y$. But $\beta_PP$ still does so long as $P \ne 0$.
If $A = 1$ (i.e. probably meaning Asym is true, or not reference), and $P = 1$, then
$$y = \beta_{A}(1) + \beta_{P}(1) + \beta_{AP}(1 \times 1)$$
$$y = -12.167 + -8.466 + 26.618$$
Finally, I think it is probably safe to remove the variance component that was estimated 0 from the model. It might be worth it to explore the data a little to make sure that it seems like a reasonable estimate and not an artifact of a misspecified model or other oddity.
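The interaction arithmetic above, spelled out with the coefficients quoted from the model output:

```python
def contribution(beta_a, beta_p, beta_ap, A, P):
    """Contribution of A, P and their interaction to the fitted value."""
    return beta_a * A + beta_p * P + beta_ap * A * P

# A == 1 (Asym not the reference) and P == 1: both main effects and the interaction contribute
y_asym = contribution(-12.167, -8.466, 26.618, A=1, P=1)  # -12.167 - 8.466 + 26.618 = 5.985
# A == 0 (reference category): only the PgvnD main effect remains
y_ref = contribution(-12.167, -8.466, 26.618, A=0, P=1)   # -8.466
```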
36,724 | Why are the Lagrange multipliers sparse for SVMs? | The Lagrange multipliers in the context of SVMs are typically denoted $\alpha_i$. The fact that one often observes that most $\alpha_i=0$ is a direct consequence of the Karush-Kuhn-Tucker (KKT) dual complementarity conditions, as introduced in Andrew Ng's CS229 Lecture notes on SVMs:
$$\alpha_i = 0 \Rightarrow y_i(\mathbf{w}^T\mathbf{x}_i+b) \geq 1$$
$$\alpha_i = C \Rightarrow y_i(\mathbf{w}^T\mathbf{x}_i+b) \leq 1$$
$$0 < \alpha_i < C \Rightarrow y_i(\mathbf{w}^T\mathbf{x}_i+b) = 1$$
Since $y_i(\mathbf{w}^T\mathbf{x}_i+b) = 1$ iff $\mathbf{x}_i$ lies exactly on the margin, i.e. is an on-margin support vector (assuming $\mathbf{x}_i$ is in the training set), and in most cases few training vectors are support vectors, as whuber pointed out in the comments, it means that most $\alpha_i$ are 0 or $C$.
Note that we can construct cases where all vectors in the training set are support vectors: e.g. see this Support Vector Machine Question.
36,725 | Analyzing changes in a repeated categorical measure | If you used a simple multinomial model without explicitly including the time dependency, I don't think your approach works well.
Luckily, researchers in psychometrics have developed a number of sophisticated tools for measurement of change based on items (like survey questions). The key idea is to decompose the change over time into an effect for trend and one for group or covariate. You can check out the work done by Gerhard Fischer (which is probably a bit outdated), for example:
Fischer, G. Some Probabilistic Models for Measuring Change. In De
Gruijter, D. N. M. & L. J. Th. van der Kamp (eds), Advances in
Psychological and Educational Measurement, London: John Wiley, 97-110,
1976.
One class of models might be particularly useful in your case, dichotomous or polytomous linear logistic model with relaxed assumptions. They adapt commonly used IRT models to allow for measurement of change for continuous and categorical covariates for any arbitrary number of items. You might check out this presentation Measuring Change: Linear Logistic Models With Relaxed Assumptions (LLRA) and this paper Hatzinger & Rusch (2009) to see whether this model class is of interest for you and some of the statistical and computational ideas behind it.
Although the LLRA are a bit outdated (newer models exist), I mention them because software for fitting them is easily and freely available in the R package eRm. Hence, if you find this useful, the eRm package contains the function eRm::LLRA (or simply LLRA) that allows you to fit such models for the measurement of change in a very straightforward manner. The function will take care of all the steps from design matrix to model fitting for you (edit: made the example to reflect your data, i.e., one item and 4 groups):
#Example 6 from Hatzinger & Rusch (2009) adapted to one dichotomous item
#four groups (a to d), two time points
require(eRm)
data(llradat3)
data<-llradat3[,c(2,5)] #select item 2 at time point 1 and time point 2
groups <- c(rep("a",15),rep("b",15),rep("c",15),rep("d",15))
llra1 <- LLRA(data,mpoints=2,groups=groups)
summary(llra1)
Results of LLRA via LPCM estimation:
Call: LLRA(X = data, mpoints = 2, groups = groups)
Conditional log-likelihood: -11.27
Number of iterations: 14
Number of parameters: 4
Estimated parameters with 0.95 CI:
Estimate Std.Error lower.CI upper.CI
d.I1.t2 0.981 1.443 -1.848 3.810
c.I1.t2 1.204 1.426 -1.591 3.999
b.I1.t2 1.204 1.426 -1.591 3.999
trend.I1.t2 0.405 0.913 -1.384 2.195
Reference Group: a
You can plot the relative change due to the groups effect by
plotGR(llra1)
References:
Fischer, G. Some Probabilistic Models for Measuring Change. In De Gruijter, D. N. M. & L. J. Th. van der Kamp (eds), Advances in Psychological and Educational Measurement, London: John Wiley, 97-110, 1976.
Hatzinger & Rusch (2009) IRT models with relaxed assumptions in eRm: A manual-like instruction. Psychology Science Quarterly, 51, pp. 87 - 120
Fischer, G. H. (1995). Linear logistic models for change. In G. H. Fischer & I. W. Molenaar (Eds.), Rasch models. Foundations, Recent Developments and Applications (pp. 157-181). New York: Springer.
36,726 | VIF values and interactions in multiple regression | I go with Penguin_Knight. The interaction term will also be checked for VIF, but you can ignore the high VIF values that arise when you include an interaction. You can find a detailed overview here:
http://www.statisticalhorizons.com/multicollinearity
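The kind of point made in the linked article can be illustrated in a few lines. This is my own sketch (invented data, not code from the article): a predictor and a power/product term built from it are almost perfectly correlated, which inflates the VIF (for two predictors, VIF = 1/(1 - r^2)), yet centering the predictor before squaring removes that correlation without changing the substance of the model.

```python
def pearson_r(a, b):
    """Sample Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / (var_a * var_b) ** 0.5

x = list(range(1, 11))                      # a predictor measured on 1..10
r_raw = pearson_r(x, [v * v for v in x])    # x vs. x^2: r is about 0.97
vif_raw = 1 / (1 - r_raw ** 2)              # two-predictor VIF: roughly 20

xc = [v - sum(x) / len(x) for v in x]       # mean-center x before squaring
r_cen = pearson_r(xc, [v * v for v in xc])  # the collinearity disappears
```

The huge raw VIF is an artifact of how the squared term was constructed, which is why it can be safely ignored (or made to vanish by centering).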
36,727 | Regsubsets with leaps fails | Did you instruct regsubsets to do a forward selection? The default is "exhaustive", I believe.
In any case, the collinearities will still cause trouble. Any time regsubsets considers a collection of variables that are too collinear (i.e. the design matrix is practically singular), it will fail.
"Best subset" methods can be unstable with multiple regression, especially when there are a lot of variables. You might want to try a random forest approach. | Regsubsets with leaps fails | Did you instruct regsubsets to do a forward selection? The default is "exhaustive", I believe.
In any case, the collinearities will still cause trouble. Any time regsubsets considers a collection of v | Regsubsets with leaps fails
Did you instruct regsubsets to do a forward selection? The default is "exhaustive", I believe.
In any case, the collinearities will still cause trouble. Any time regsubsets considers a collection of variables that are too collinear (i.e. the design matrix is practically singular), it will fail.
"Best subset" methods can be unstable with multiple regression, especially when there are a lot of variables. You might want to try a random forest approach. | Regsubsets with leaps fails
Did you instruct regsubsets to do a forward selection? The default is "exhaustive", I believe.
In any case, the collinearities will still cause trouble. Any time regsubsets considers a collection of v |
36,728 | Non-parametric alternative for 2-way ANOVA [duplicate] | The problem you have in analyzing these data is that interactions don't really make sense when you have non-parametric tests. Non-parametric tests consider data to be ranks - that is we know if something is higher or lower, but we don't know the magnitude.
An interaction asks whether the magnitude of the effect of X1 depends on the level of X2. You're asking to compare the sizes of effects, but effect size is not something you can consider in a non-parametric test.
However, unequal variance is a bad reason to do a non-parametric test. Unequal variance is pretty much irrelevant if your group sizes are equal. If they're not, it's really easy to correct for: you can use survey methods, the Brown-Forsythe correction, the Welch correction, robust estimates, or sandwich estimates. Which of these is easiest depends on the software that you're using and what you're familiar with.
36,729 | Non-parametric alternative for 2-way ANOVA [duplicate] | The only method that I know of that has support in the literature for the interaction test is the use of a transformation of the raw data to normal scores, such as the ranking procedures by Van der Waerden and by Blom. Both are available in SAS under proc rank.
36,730 | Regarding precision and recall for the highly unbalanced validation data set [duplicate] | I work in biomedical text classification, where this sort of situation happens all the time. You're exactly right--precision and recall aren't all that informative for highly-skewed data. I tend to use AUC as my performance metric, as it's not sensitive to class distribution.
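AUC has a convenient rank interpretation: the probability that a randomly chosen positive is scored above a randomly chosen negative (ties counting half). A minimal sketch, with made-up scores of my own, that also shows why it is insensitive to class distribution:

```python
def auc(scores, labels):
    """AUC as P(random positive scored above random negative); ties count 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.2]
labels = [1,   0,   1,   0]
base = auc(scores, labels)                    # 3 of 4 pos/neg pairs ranked right

# replicate every negative 10x: the class ratio changes, the AUC does not
skewed_scores = scores + [s for s, y in zip(scores, labels) if y == 0] * 9
skewed_labels = labels + [0] * 18
skew = auc(skewed_scores, skewed_labels)      # identical to base
```

Replicating one class rescales both numerator and denominator by the same factor, which is exactly why AUC is stable under skew while precision is not.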
36,731 | Regarding precision and recall for the highly unbalanced validation data set [duplicate] | You could introduce a cost function, consistent with your application, with values for TP, FP, TN, FN and optimise your predictors for that.
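To make that concrete, here is a sketch of my own (invented costs and scores, not from the answer): assign a unit cost to each FP and FN, then pick the decision threshold that minimises total cost. A false negative is assumed to be five times as costly as a false positive.

```python
def total_cost(probs, labels, thresh, c_fp=1.0, c_fn=5.0):
    """Total misclassification cost at a given decision threshold."""
    fp = sum(1 for p, y in zip(probs, labels) if p >= thresh and y == 0)
    fn = sum(1 for p, y in zip(probs, labels) if p < thresh and y == 1)
    return c_fp * fp + c_fn * fn

probs  = [0.95, 0.7, 0.6, 0.4, 0.2, 0.1]
labels = [1,    1,   0,   1,   0,   0]
best = min([0.1, 0.3, 0.5, 0.9],
           key=lambda t: total_cost(probs, labels, t))
# costly false negatives push the chosen threshold down
```

With these numbers the threshold 0.3 wins: it accepts one false positive (cost 1) rather than miss the positive scored 0.4 (cost 5).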
36,732 | Regarding precision and recall for the highly unbalanced validation data set [duplicate] | I think you need to be clearer what you mean when you say "Not valid": in the sense that they summarise the contingency table, they are valid, but they are biased in the case of highly imbalanced data. One alternate measure you can look at, which tends to be more stable across class balance, is the mean of true positive rate and (1 - false positive rate).
You should be careful about what you want to do with this, though: precision on your positive class is a useful metric to have, because optimising that recall/precision tradeoff on an infrequently occurring class is often the goal of the practical application of classifiers.
36,733 | A re-formalization of a conjugate prior? | It is an interesting remark that I have not seen explicitly spelled out, however the parameter space for conjugate priors is often chosen in the opposite way, namely the largest possible set that keeps the sampling distribution well defined. See Brown's Fundamentals of Statistical Exponential Families (1986).
36,734 | Can I prove a curvilinear relationship when the linear independent variable is not significant | No, it is not essential that both the linear and quadratic terms be significant. Only the quadratic term need be significant.
In fact, it is important to note that the linear term takes on a somewhat different interpretation in the context of a model that also includes the quadratic term. In such a model, the linear term now represents the slope of the line tangent to the curve at the y-intercept, that is, the predicted slope of y on x when and only when x = 0. So a test of the linear term in a model like this is not in general testing the same thing as in a model that just includes the linear term without the quadratic.
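The tangent-slope interpretation is easy to check numerically: in $y = b_0 + b_1 x + b_2 x^2$ the derivative is $b_1 + 2 b_2 x$, which equals $b_1$ only at $x = 0$. A small sketch with coefficients invented for illustration:

```python
def deriv_at(f, x0, h=1e-6):
    """Central-difference estimate of f'(x0)."""
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

b0, b1, b2 = 2.0, 0.5, 3.0
f = lambda x: b0 + b1 * x + b2 * x * x

slope_at_zero = deriv_at(f, 0.0)  # ~b1 = 0.5: what the linear term tests
slope_at_two  = deriv_at(f, 2.0)  # ~b1 + 2*b2*2 = 12.5: quite different
```

So a non-significant linear term only says the curve is roughly flat at x = 0, not that x is unrelated to y elsewhere.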
36,735 | Can I prove a curvilinear relationship when the linear independent variable is not significant | Think about what significance means. A relationship of the form you suggest can be characterized as
$Y = a_1X^2+a_2X+b$
and empirically estimated by the regression $Y=\alpha_1 X^2+\alpha_2 X+\beta+\epsilon$.
What does the significance of an estimate - say, $\alpha_2$ - mean? The p-value is the probability of data at least this extreme under $H_0$, and when it is "not significant", what you fail to reject is the possibility that the coefficient really is zero.
Does this invalidate the assumption of a curvilinear relationship? Not in my opinion. Rather, it seems to suggest that $a_2$ is really zero.
Consider the following example (written in Stata).
First we generate some data:
set obs 20000
gen x = uniform()
gen control_one = uniform()
gen control_two = uniform()
drawnorm e, m(0) sd(0.5)
We then specify a new variable X = x^2 and a relationship for an outcome variable Y:
gen X = x^2
gen Y = control_one+control_two+X+e
(This corresponds to a multidimensional curvilinear model in x with the coefficients of the linear and constant terms equal to zero.)
We then run some regressions:
reg Y control_one control_two
reg Y control_one control_two x
reg Y control_one control_two X x
The x term is significant in the second model, but not in the third. As far as I understand, this reflects your experience with real data.
36,736 | Can I prove a curvilinear relationship when the linear independent variable is not significant | It is actually not essential that either term be significant, but you never prove anything with just a model.
The given estimates of coefficients are estimates, and they provide evidence. A large coefficient on the quadratic term provides a lot of evidence of a curvilinear relationship; a small coefficient provides only a little. The linear term is irrelevant: it can be positive, negative, near 0, or whatever.
A plot of the data will also provide evidence of a curvilinear relationship.
Statistical significance means a very precise thing: if, in the population from which this sample was drawn, the effect were really 0, is there less than a 5% chance that, in a sample of the size available, a test statistic this far or farther from 0 would be obtained?
36,737 | Can I prove a curvilinear relationship when the linear independent variable is not significant | As noted, the significance of the curvilinear term stands by itself, regardless of the significance of the linear term in the regression. If the linear term is near zero and the quadratic term is significant, the curve is a U or an inverted U. If both terms are significant, the resulting curve is more like a hill with an accelerating (or decelerating) slope.
36,738 | Terminology for different types of epidemiological variable | Let us assume we're talking about a particular categorical variable. Let's say a "residence" variable that indicates if someone lives in a private home, an apartment, a dorm or group living space, a nursing home, a prison, or other.
For your first example, I don't know that I'd call the variable anything. I'd probably end up just saying what value of the variable I ended up excluding. For example: "Subjects who were incarcerated in correctional institutions were excluded from the study."
In the second example, depending on what you're doing you might say you're "stratifying by Y". That being said, I'd never be all that comfortable with splitting them into two entirely separate data sets and analyzing one utterly without the context of the other. Unless it's just a way to partition things that really should be two separate studies - but in that case you don't need to talk about the variable, as you're really just running two separate studies whose data you happened to get in the same file.
36,739 | Terminology for different types of epidemiological variable | For the first question, there is no particular term for X. Exclusion criteria is a term used to refer to rules used to exclude patients from being enrolled in the trial. There may be an analogy here to the process of excluding data when a mathematical condition exists but it is a different situation and I have never heard the term exclusion criteria used in that case.
In my experience with clinical trials, the analysis you are describing in the second question is most commonly called subgroup analysis. I imagine epidemiologists use that term also.
36,740 | Using $\chi^2$ to compare two Markov transition matrices | It looks as though you would like to use the Pearson $\chi^2$ test to assess whether a sample $x_1,...,x_n$ that is taken from a first-order chain with transition probabilities given by $M$ is well fit by the second order chain with transition probabilities given by $M^2$.
The notation you are using is a little confusing only because it seems to imply matrix multiplication. What I understand you to mean is that if I collapse transition probabilities in $M^2$ then I'll get the transition probabilities given in $M$.
Is the $\chi^2$ test appropriate here?
It depends on what you are testing. But, I think in your case that the idea is good.
The form of the $\chi^2$ test should be a bit different. For example, see the Wikipedia entry on Pearson's chi-squared test. If the expected values are denoted $E_\alpha$ and the observed values $O_\alpha$, the form of the test should be
$$X^2 =\sum_\alpha \frac{\left(O_\alpha - E_\alpha\right)^2}{E_\alpha}.$$
Now, we need to think of the second-order chain in its flattened first-order form. That is, the matrix of transition probabilities consists of the bigram transition probabilities.
If we let $f_{ij}$ be the frequencies of observed bigram transitions taken from the sample $x_1,...,x_n$, $f_{i}$ the observed frequencies of bigrams, and $p_{ij}$ the second-order transition probabilities given in $M^2$, then we can calculate the Pearson $\chi^2$ statistic
$$X^2 = \sum_{ij} \frac{\left(f_{ij} - f_{i}p_{ij}\right)^2}{ f_{i}p_{ij}},$$
and per Billingsley (1960)
$$X^2 \sim \chi^2_{d-s},$$
where $d$ is the number of positive entries in the transition matrix $M^2$ and $s$ is the number of unique bigrams.
This gives an evaluation of how well the distribution $M^2$ fits the data $x_1,...,x_n$.
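As a sketch, the statistic can be computed directly from transition counts. This is my own minimal Python illustration with invented counts; the state labels stand in for the flattened bigram states, $f_i p_{ij}$ gives the expected counts, and the sum runs over the positive entries of the model:

```python
def markov_chi2(f_ij, f_i, p_ij):
    """Pearson X^2 for observed transition counts f_ij against
    expected counts f_i * p_ij (Bartlett/Billingsley form)."""
    x2 = 0.0
    for i, row in p_ij.items():
        for j, p in row.items():
            if p > 0:
                expected = f_i[i] * p
                observed = f_ij.get(i, {}).get(j, 0)
                x2 += (observed - expected) ** 2 / expected
    return x2

p_ij = {"A": {"A": 0.5, "B": 0.5}, "B": {"A": 0.25, "B": 0.75}}  # model probs
f_i  = {"A": 40, "B": 60}                                        # state counts
f_ij = {"A": {"A": 22, "B": 18}, "B": {"A": 12, "B": 48}}        # observed
x2 = markov_chi2(f_ij, f_i, p_ij)   # 0.2 + 0.2 + 0.6 + 0.2 = 1.2
```

The resulting value would then be compared against a $\chi^2_{d-s}$ reference distribution with $d$ and $s$ as defined above.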
Billingsley credits Bartlett (1951) with this result, so as usual these days we are looking at material that has been developed some time ago! Billingsley also notes that the usual $\chi^2$ test on values from a Markov chain may be interpreted as a test for independent sampling given that the sampling is derived from a first-order Markov chain.
MS Bartlett (1951) The frequency goodness of fit test for probability chains. Proc. Camb. Phil. Soc. 47: 86--95.
P Billingsley (1960) Statistical methods in Markov chains. Technical Report P-2092. The RAND Corporation. | Using $\chi^2$ to compare two Markov transition matrices | It looks as though you would like to use the Pearson $\chi^2$ test to assess whether a sample $x_1,...,x_n$ that is taken from a first-order chain with transition probabilities given by $M$ is well fi | Using $\chi^2$ to compare two Markov transition matrices
It looks as though you would like to use the Pearson $\chi^2$ test to assess whether a sample $x_1,...,x_n$ that is taken from a first-order chain with transition probabilities given by $M$ is well fit by the second order chain with transition probabilities given by $M^2$.
The notation you are using is a little confusing only because it seems to imply matrix multiplication. What I understand you to mean is that if I collapse transition probabilities in $M^2$ then I'll get the transition probabilities given in $M$.
Is the $\chi^2$ test appropriate here?
It depends on what you are testing, but I think in your case the idea is good.
The form of the $\chi^2$ test should be a bit different. For example, see the Wikipedia entry on Pearson's chi-squared test. If the expected values are denoted $E_\alpha$ and the observed values $O_\alpha$, the form of the test should be
$$X^2 =\sum_\alpha \frac{\left(O_\alpha - E_\alpha\right)^2}{E_\alpha}.$$
Now, we need to think of the second-order chain in its flattened first-order form. That is, the matrix of transition probabilities consists of the bigram transition probabilities.
If we let $f_{ij}$ be the frequencies of observed bigram transitions taken from the sample $x_1,...,x_n$, $f_{i}$ the observed frequencies of bigrams, and $p_{ij}$ the second-order transition probabilities given in $M^2$, then we can calculate the Pearson $\chi^2$ statistic
$$X^2 = \sum_{ij} \frac{\left(f_{ij} - f_{i}p_{ij}\right)^2}{ f_{i}p_{ij}},$$
and per Billingsley (1960)
$$X^2 \sim \chi^2_{d-s},$$
where $d$ is the number of positive entries in the transition matrix $M^2$ and $s$ is the number of unique bigrams.
This gives an evaluation of how well the distribution $M^2$ fits the data $x_1,...,x_n$.
Billingsley credits Bartlett (1951) with this result, so as usual these days we are looking at material that has been developed some time ago! Billingsley also notes that the usual $\chi^2$ test on values from a Markov chain may be interpreted as a test for independent sampling given that the sampling is derived from a first-order Markov chain.
MS Bartlett (1951) The frequency goodness of fit test for probability chains. Proc. Camb. Phil. Soc. 47: 86--95.
P Billingsley (1960) Statistical methods in Markov chains. Technical Report P-2092. The RAND Corporation.
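As a rough numerical sketch of the statistic above (a deliberately simplified setting: two states, two-step transitions counted directly, so not Billingsley's exact bigram construction), one can simulate a chain from a known $M$ and compare the observed two-step transition counts against the probabilities in $M^2$:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)

# A two-state first-order chain with transition matrix M.
M = np.array([[0.7, 0.3],
              [0.4, 0.6]])
M2 = M @ M  # two-step transition probabilities

# Simulate x_1, ..., x_n from the chain.
n = 5000
x = np.empty(n, dtype=int)
x[0] = 0
for t in range(1, n):
    x[t] = rng.choice(2, p=M[x[t - 1]])

# f[i, j]: observed counts of two-step transitions x_t = i -> x_{t+2} = j;
# f_i: number of times state i starts such a transition.
f = np.zeros((2, 2))
for t in range(n - 2):
    f[x[t], x[t + 2]] += 1
f_i = f.sum(axis=1)

# Pearson chi-square statistic against the expected counts f_i * p_ij.
expected = f_i[:, None] * M2
X2 = ((f - expected) ** 2 / expected).sum()

# Degrees of freedom in the spirit of d - s above: positive entries of M2
# minus the number of states (here 4 - 2 = 2).
df = int((M2 > 0).sum()) - 2
p_value = chi2.sf(X2, df)
print(X2, p_value)
```

Since the data really do come from $M$, the statistic should typically be unexceptional for a $\chi^2_2$; note that the overlapping two-step counts make this only an approximation to the i.i.d. Pearson setting.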
36,741 | Part correlation and R squared

Your output shows strong suppressive activity. It can never be shown on a Venn diagram. A suppressor IV is a predictor whose addition to the model raises $R^2$ by more than its own $r^2$ with the DV, because the suppressor is correlated mostly with the error term of the model that lacks the suppressor, rather than with the DV itself. Now, we know that the increase in $R^2$ due to the inclusion of an IV is the squared part correlation of that IV in the resulting model. So, if the absolute value of the part correlation is greater than the absolute value of the zero-order correlation, that variable is a suppressor. In your last table we see that 4 of 5 predictors are suppressors.
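A small numeric illustration of this diagnostic (made-up data; `x2` is built to correlate with the noise in `x1` but not with `y`): the part (semipartial) correlation of a predictor is the square root of the increase in $R^2$ when it is added, and for a suppressor it exceeds the zero-order correlation in absolute value.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10000
t = rng.normal(size=n)        # the true score driving y
e1 = rng.normal(size=n)       # nuisance component of x1
x1 = t + e1
x2 = e1                       # suppressor: tied to x1's noise, not to y
y = t + 0.5 * rng.normal(size=n)

def r_squared(cols, y):
    """R^2 of an OLS fit with intercept on the given columns."""
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_full = r_squared([x1, x2], y)
r2_reduced = r_squared([x1], y)

part_corr_x2 = np.sqrt(r2_full - r2_reduced)   # |part correlation| of x2
zero_order_x2 = abs(np.corrcoef(x2, y)[0, 1])  # |zero-order correlation|
print(part_corr_x2, zero_order_x2)
```

Here the part correlation of `x2` is large (around 0.6 in population terms) while its zero-order correlation with `y` is essentially zero, so `x2` is flagged as a suppressor by the rule above.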
36,742 | OLS: $E[\epsilon_{it}^T\epsilon_{it}] \not= 0$ in 1st equation biases standard errors in 2nd equation?

To be sure, you need to go into the details; this means comparing the true variance-covariance matrix with the one you get in the second OLS stage.
The true one:
This can be obtained by substituting eq. 2 into eq. 1; pooled OLS follows, and from it the true variance-covariance matrix of $\hat a, \hat b$:
$Y_{it} = \alpha_i + \beta_i X_{it} + aD_t + bD_tZ_{i} +D_t u_{i} + \epsilon_{it}$
Using matrix notation to split the equation into the $\gamma$ parameters and the others leads to:
$Y = X\theta + Z\gamma + \varepsilon$
where we are interested in $V(\hat \gamma)$, with $\gamma=[a \; b]$; $Z$ is a two-column matrix $Z=[D_t \; D_tZ_i]_{[i=1,..,N;t=1,...,T]}$ (a similar structure defines $X$, but this is not of interest); and $V(\varepsilon)=\Sigma$ has a full structure of between-firm covariances, which is why it is not diagonal ($\sigma^2I_{NT}$) as in the Gauss-Markov assumptions. By Frisch-Waugh we can express the OLS estimator of $\gamma$ as:
$\hat \gamma = (Z'M_{X}Z)^{-1}Z'M_{X}Y$ where $M_X= I-X(X'X)^{-1}X' $
which implies the following true variance:
$V(\hat \gamma) = H\Sigma H'$ where $H = (Z'M_{X}Z)^{-1}Z'M_{X}$
The other one:
Under the assumption of uncorrelated firms (and time periods, but this is not the issue here), $\Sigma$ has a simpler diagonal structure $\Delta$; that is, the off-diagonal terms of $\Delta$ are 0. Under an even simpler specification (the one estimated by default by econometric and statistical software for OLS), $\Sigma$ follows the Gauss-Markov assumptions, meaning that even the diagonal terms are equal, so $\Sigma$ reduces to $\sigma^2I$.
This implies that ignoring between-firm correlation leads to $V(\hat\gamma)$ as:
$V(\hat \gamma) = H\Delta H'$ or $V(\hat \gamma) = H\sigma^2I H' \equiv \sigma^2(Z'M_xZ)^{-1}$
which, as can be seen, are not equal to the true one.
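A numerical sketch of this comparison (hypothetical dimensions and an equicorrelated $\Sigma$ chosen just for illustration): compute $H\Sigma H'$ and $\sigma^2(Z'M_XZ)^{-1}$ on the same design and note that they differ as soon as $\Sigma$ has off-diagonal structure.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# X holds the nuisance regressors (including a constant); Z holds the two
# regressors of interest, whose coefficients are gamma = [a b].
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
Z = rng.normal(size=(n, 2))

# Annihilator of X and the Frisch-Waugh matrix H = (Z'M_X Z)^{-1} Z'M_X.
M_X = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
H = np.linalg.solve(Z.T @ M_X @ Z, Z.T @ M_X)

# A non-diagonal error covariance: equicorrelated errors across observations.
sigma2, rho = 1.0, 0.4
Sigma = sigma2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

V_true = H @ Sigma @ H.T                         # H Sigma H'
V_naive = sigma2 * np.linalg.inv(Z.T @ M_X @ Z)  # sigma^2 (Z'M_X Z)^{-1}
print(V_true)
print(V_naive)
```

With $\Sigma=\sigma^2 I$ the two formulas coincide (since $M_X$ is symmetric and idempotent, $HH'=(Z'M_XZ)^{-1}$); with the correlated $\Sigma$ they do not.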
36,743 | OLS: $E[\epsilon_{it}^T\epsilon_{it}] \not= 0$ in 1st equation biases standard errors in 2nd equation?

I am putting up another answer with more details.
In the standard linear regression model (in matrix form):
$$Y=X\beta+\varepsilon$$
the OLS estimate is
$$\hat\beta=(X^TX)^{-1}X^TY.$$
Its variance then is
$$Var(\hat\beta)=(X^TX)^{-1}X^TVar(Y)X(X^TX)^{-1}.$$
The usual assumption for regression is that
$$Var(Y)=\sigma^2I,$$
where $I$ is the identity matrix. Then
$$Var(\hat\beta)=\sigma^2(X^TX)^{-1}.$$
Now in your case you have two models:
$$Y_{i}=M_i\delta_i+\epsilon_i$$
and
$$\Gamma=Lc+u,$$
where
$Y_i^T=(Y_{i1},...,Y_{iT})$,
$M_i=[1,X_i,D]$, with $X_i^T=(X_{i1},...,X_{iT})$, $D^T=(D_1,...,D_T)$
$\delta_i^T=(\alpha_i,\beta_i,\gamma_i)$
$\epsilon_i^T=(\epsilon_{i1},...,\epsilon_{iT})$
$\Gamma^T=(\gamma_1,...,\gamma_n)$
$L=[1,Z]$, with $Z^T=(Z_1,...,Z_n)$
$c^T=(a,b)$
$u^T=(u_1,...,u_N)$.
Note that you state the second model for the estimates of $\gamma$, which is unusual, hence I restate it in the usual form, for the "true" $\gamma$.
Let us write down the covariance matrix for OLS estimates of coefficients $c$:
$$Var(\hat{c})=(L^TL)^{-1}L^TVar(\Gamma)L(L^TL)^{-1}$$
The problem is that we do not observe $\Gamma$. We observe the estimates $\hat\Gamma$. $\hat\gamma_i$ is part of vector
$$\hat\delta_i=\delta_i+(M_i^TM_i)^{-1}M_i^T\epsilon_i.$$
Assume that the $\delta_i$ are random and independent of $\epsilon_i$ and $M_i$. This surely holds for $\gamma_i$, so we do not lose anything if we extend this to the other elements of $\delta_i$.
Let us stack all $\hat\delta_i$ on top of each other:
$$\hat\delta^T=[\delta_1^T,...,\delta_N^T]$$
and explore the variance of $\hat\delta$:
$$Var(\hat\delta)=\begin{bmatrix}
Var(\hat\delta_1) & cov(\hat\delta_1,\hat\delta_2) & \dots & cov(\hat\delta_1,\hat\delta_N)\\
\dots & \dots & \dots & \dots\\
cov(\hat\delta_N,\hat\delta_1) & cov(\hat\delta_N,\hat\delta_2) & \dots & Var(\hat\delta_N)
\end{bmatrix}$$
Assume that $Var(\epsilon_i)=\sigma^2_\epsilon I$ and that $E\epsilon_i\epsilon_j^T=0$.
For $i\neq j$ we have
\begin{align}
cov(\hat\delta_i,\hat\delta_j)&=cov(\delta_i,\delta_j)+cov((M_i^TM_i)^{-1}M_i^T\epsilon_i,(M_j^TM_j)^{-1}M_j^T\epsilon_j)\\
&=(M_i^TM_i)^{-1}M_i^TE(\epsilon_i\epsilon_j^T)M_j(M_j^TM_j)^{-1}\\
&=0
\end{align}
For diagonal elements we have
$$
Var(\hat\delta_i)=Var(\delta_i)+\sigma_\epsilon^2(M_i^TM_i)^{-1}
$$
Let us turn back to the variance of $\hat c$. Since we substitute $\hat\Gamma$ for $\Gamma$, the variance is
$$Var(\hat{c})=(L^TL)^{-1}L^TVar(\hat\Gamma)L(L^TL)^{-1},$$
We can extract $Var(\hat\Gamma)$ from $Var(\hat\delta)$ by selecting appropriate elements:
$$Var(\hat\Gamma)=Var(\Gamma)+diag(g_1,...,g_n)$$
where $g_i$ is the element of $\sigma_\epsilon^2(M_i^TM_i)^{-1}$ corresponding to the $Var(\hat\gamma_i)$. Each $g_i$ is different from $g_j$ since they correspond to different $X_{it}$ and $X_{jt}$ which are not assumed to be equal.
So we get the surprising result that, even if we assume all the necessary properties, the resulting covariance matrix will not be algebraically equal to the usual OLS covariance matrix, since for that we would need $Var(\hat\Gamma)$ to be a constant times the identity matrix, which it clearly is not.
All the formulas above were derived assuming that $X_{ij}$ are constant, so they are conditional on $X_{ij}$. This means that we actually calculated $Var(\hat\Gamma|X)$. By putting additional assumptions on $X_{ij}$, I think it would be possible to show that the unconditional variance is OK.
The independence assumption placed on $\epsilon_i$ can also be relaxed to uncorrelatedness.
It would also be possible to use a simulation study to see how the covariance matrices differ if we use $\hat\Gamma$ instead of $\Gamma$.
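A Monte Carlo sketch of the diagonal-term formula $Var(\hat\delta_i)=Var(\delta_i)+\sigma_\epsilon^2(M_i^TM_i)^{-1}$ for a single unit (made-up design; the components of $\delta_i$ are taken independent for simplicity):

```python
import numpy as np

rng = np.random.default_rng(3)
T, p = 40, 3
M_i = np.column_stack([np.ones(T), rng.normal(size=(T, p - 1))])  # fixed design
sigma_eps = 0.5   # sd of epsilon_{it}
tau2 = 0.2        # variance of each component of delta_i

reps = 20000
pinv = np.linalg.pinv(M_i)                              # (M'M)^{-1} M'
deltas = rng.normal(scale=np.sqrt(tau2), size=(reps, p))
eps = rng.normal(scale=sigma_eps, size=(reps, T))
Y = deltas @ M_i.T + eps                                # one row per replication
Est = Y @ pinv.T                                        # OLS estimates delta_hat

V_mc = np.cov(Est, rowvar=False)                        # simulated Var(delta_hat)
V_theory = tau2 * np.eye(p) + sigma_eps**2 * np.linalg.inv(M_i.T @ M_i)
print(np.round(V_mc, 3))
print(np.round(V_theory, 3))
```

The simulated covariance of $\hat\delta_i$ matches $Var(\delta_i)+\sigma_\epsilon^2(M_i^TM_i)^{-1}$ up to Monte Carlo error.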
36,744 | OLS: $E[\epsilon_{it}^T\epsilon_{it}] \not= 0$ in 1st equation biases standard errors in 2nd equation?

I think the problem lies in the definition of the second model. I think it is assumed that
$$\gamma_i=a+bZ_i+u_i$$
with the usual assumption that
$$cov(\gamma_i,\gamma_j|Z_1,...,Z_N)=0,$$
i.e. that the $\gamma_i$ are not correlated if we control for $Z_i$. Now when you substitute $\hat{\gamma}$ for $\gamma$, you need to check whether the assumption still holds, i.e. whether
$$cov(\hat{\gamma_i},\hat{\gamma}_j|Z_i)=0.$$
Now
$$\hat{\gamma}_i=\gamma_i+L(\epsilon_{it}),$$
where $L$ is some linear function. It is safe to assume that $\epsilon_{it}$ is independent of $Z_i$, but if $E\epsilon_{it}\epsilon_{jt}\neq0$, the necessary assumption does not hold.
Since the uncorrelatedness assumption is central to the calculation of the usual OLS statistics, this is why the standard errors are biased.
This was a rough outline, but I think the idea should work if you get into the nitty-gritty details of the OLS machinery.
36,745 | What is the difference between Informative (IVM) and Relevance (RVM) vector machines

The RVM places an Automatic Relevance Determination (ARD) prior on the weights in a regularized regression/logistic regression setup. (The ARD prior is just a weak gamma prior on the precision of a Gaussian random variable.) Marginalizing out the weights and maximizing the likelihood of the data with respect to the precisions causes many of the precision parameters to become large, which pushes the associated weights to zero. If you use feature vectors given by a design matrix, then this strategy selects a small set of examples that predict the target variable well.
The IVM strategy is fundamentally different from the RVM's strategy. The IVM is a Gaussian Process method that selects a small set of points from the training set using a greedy selection criterion (based on change in entropy of the posterior GP) and combines this strategy with standard GP regression/classification on the sparse set of points.
Unlike the SVM, for both the IVM and RVM there is not an obvious geometric interpretation of relevant or informative vectors. Basically, both algorithms find sparse (the SVM and IVM are dual sparse, but the RVM should probably be considered primal sparse) solutions for regression/classification problems, but they use different approaches to do so.
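A minimal sketch of the ARD mechanism behind the RVM (Tipping-style type-II maximum likelihood fixed-point updates; the noise precision is assumed known here, which a real RVM would also estimate): precisions $\alpha_i$ for irrelevant weights diverge, pruning those weights.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d = 200, 10

Phi = rng.normal(size=(n, d))        # design / basis matrix
w_true = np.zeros(d)
w_true[:2] = [2.0, -3.0]             # only two columns actually matter
y = Phi @ w_true + 0.1 * rng.normal(size=n)

beta = 1.0 / 0.1**2                  # noise precision (assumed known here)
alpha = np.ones(d)                   # one ARD precision per weight

for _ in range(50):
    # Posterior over weights given the current alphas.
    S = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    mu = beta * S @ Phi.T @ y
    # Fixed-point update: alpha_i <- gamma_i / mu_i^2.
    gamma = 1.0 - alpha * np.diag(S)
    alpha = np.minimum(gamma / mu**2, 1e8)  # cap for numerical stability

print(np.round(mu, 3))   # pruned weights are driven toward 0
print(alpha < 1e6)       # True marks weights the model keeps
```

The two genuinely relevant weights keep small $\alpha$ and posterior means near their true values; weights with no real support get huge $\alpha$ and effectively zero coefficients, which is the sparsity mechanism the RVM relies on.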
36,746 | What is the difference between Informative (IVM) and Relevance (RVM) vector machines

Some of the distinctions I have identified so far were hidden in (one of) the original papers by Neil Lawrence. There are two versions of "A Sparse Bayesian Compression Scheme - The Informative Vector Machine" [Kernel Workshop at NIPS 2001], one on the Microsoft Research site, and one on Lawrence's site.
In the MS version there is an extra sentence with the statement "the selected data points are close to the decision boundary, a characteristics shared with the SVM". So my original view that the IVM vectors represented the 'middle' was wrong.
The other point is that it is a compression scheme in the sense that it is looking for a 'sparse representation of the data set', so the IVM also seeks to retain those vectors that provided the most information during the analysis; they can then be re-used as a data set and any computation repeated.
This ability to reduce the size of the data set is useful when the computation is $O(M^3)$.
The RVM does select the 'middle' vectors (and their weights) which act on the basis functions (e.g. Gaussians) (see Ch7.2 'Relevance Vector Machines' in Bishop's 'Pattern recognition and machine learning').
OK, so the explanation is a bit of a hand-waving description and isn't fully complete, but hopefully it will help those who aren't as comfortable with compact matrix formulations. More feedback would still be welcome.
36,747 | A question about parameters of Gamma distribution in Bayesian econometrics

For anyone still struggling with Koop's terrible notation: the problem is that Koop uses neither the scale nor the rate parametrization, but rather a "mean, degrees of freedom" parametrization (see Appendix, Def. B.22).
The distribution of $h$ in a proper parametrization (shape, rate) is thus
$$
h \sim \text{Gamma}(shape = \underline{\nu}/2 , rate = \underline{\nu s}^2 / 2)
$$
using Koop's notation for the parameters.
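To make the mapping concrete (a sketch with made-up values for $\underline{s}^2$ and $\underline{\nu}$; scipy parametrizes the gamma by shape and scale): Koop's $G(\underline{s}^{-2}, \underline{\nu})$ corresponds to shape $\underline{\nu}/2$ and rate $\underline{\nu s}^2/2$, so its mean is $\underline{s}^{-2}$.

```python
from scipy.stats import gamma

s2 = 2.5       # hypothetical value of Koop's s^2
nu = 7.0       # hypothetical degrees of freedom nu

shape = nu / 2.0
rate = nu * s2 / 2.0
dist = gamma(a=shape, scale=1.0 / rate)  # scipy uses shape and scale = 1/rate

print(dist.mean())  # equals 1/s2, Koop's "mean" parameter
print(dist.var())   # equals 2/(nu * s2**2), i.e. 2*mean^2/nu as in G(mean, dof)
```
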
36,748 | A question about parameters of Gamma distribution in Bayesian econometrics

I think that the Wikipedia article is referring to a specific form of the gamma distribution known as $\chi^2$. Chi square with $\nu$ degrees of freedom is $\rm{Gamma}(\nu/2, 1/2)$ in the shape-rate parametrization, and $s^2$ would be the constant that the $\chi^2$ random variable is multiplied by to get a random variable with the distribution of a variance estimate. That is, $\alpha=\nu/2$ and $\beta=1/2$. It is $s$ that is the standard error and not $s^2$. In the article you referred to, the $\chi^2$ is listed under special cases (second bullet).
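The special-case relationship is easy to check numerically (scipy's gamma uses shape and scale, so rate $1/2$ becomes scale $2$): a $\chi^2$ with $\nu$ degrees of freedom has exactly the $\rm{Gamma}(\nu/2, \text{rate}=1/2)$ density.

```python
import numpy as np
from scipy.stats import chi2, gamma

nu = 5.0
x = np.linspace(0.1, 20.0, 200)

pdf_chi2 = chi2.pdf(x, df=nu)
pdf_gamma = gamma.pdf(x, a=nu / 2.0, scale=2.0)  # shape nu/2, scale 2 (rate 1/2)

print(np.max(np.abs(pdf_chi2 - pdf_gamma)))  # agrees to numerical precision
```
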
36,749 | A question about parameters of Gamma distribution in Bayesian econometrics

It is customary to impose (as a prior) either the gamma distribution on $h=\frac{1}{\sigma^2}$ or the inverse gamma distribution on $\sigma^2$. Then the posterior has a nice form. I believe you can assign a gamma distribution to $\sigma^2$, and still all the calculation to derive the marginal by integrating out $\sigma^2$ will go through.
36,750 | Something like Central Limit Theorem for variance and maybe even for covariance?

For finite populations, as the sample size increases, the variance of the sample variance decreases (the finite population correction). When the sample size is equal to the population size, the sample variance is no longer a random variable. For any finite population, there will not be an asymptotic distribution of the sample variance. See Cochran (1977), Sampling Techniques.
For infinite populations, see Theorem 5.3.2 on the first page of
http://www.unc.edu/~hannig/STOR655/handouts/Handout-asymptotics.pdf
Looks like the sample variance is asymptotically normal!
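A quick simulation sketch of that asymptotic-normality result (exponential data as a hypothetical example): $\sqrt{n}(s^2-\sigma^2)$ should be approximately normal with variance $\mu_4-\sigma^4$, which for Exponential(1) is $9-1=8$.

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 1000, 4000

# Exponential(1): sigma^2 = 1 and central fourth moment mu_4 = 9.
samples = rng.exponential(scale=1.0, size=(reps, n))
s2 = samples.var(axis=1, ddof=1)
z = np.sqrt(n) * (s2 - 1.0)

print(z.mean(), z.var())  # mean near 0, variance near mu_4 - sigma^4 = 8
```
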
36,751 | Box-Cox transformation for mixed models | Taking a look at the web (e.g., Google) always helps
http://www.sciencedirect.com/science/article/pii/S0378375804001235
http://onlinelibrary.wiley.com/doi/10.1111/j.1467-985X.2005.00391.x/abstract
Of course, linear Gaussian models with random effects are a huge topic.
36,752 | Confused by MATLAB's implementation of ridge | This is a MATLAB program to validate what cardinal said; the discrepancy is indeed due to the centering and scaling:
% Create A (a 10-by-3 design matrix) and b (a 10-by-1 response)
A = rand(10,3);
b = rand(10,1);
lambda = 0.01;
% center and scale the columns of A (zero mean, unit standard deviation)
s = std(A,0,1);
s = repmat(s,10,1);
A = (A - repmat(mean(A),10,1))./s;
% check the result
X1 = (A'*A + lambda*eye(3))\(A'*b);   % closed-form ridge solution
X2 = ridge(b,A,lambda,1);
X1 should then equal X2.
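For readers without MATLAB, the same check can be sketched in Python/NumPy (this is an illustration of the centering/scaling argument, not MATLAB's actual implementation); it also shows how to map the standardized coefficients back to the original scale:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((10, 3))
y = rng.random(10)
lam = 0.01

mu, sd = X.mean(axis=0), X.std(axis=0, ddof=1)
Xs = (X - mu) / sd                      # z-scored predictors, as the scaled fit uses
b_std = np.linalg.solve(Xs.T @ Xs + lam * np.eye(3), Xs.T @ (y - y.mean()))

b_orig = b_std / sd                     # back-transform slope coefficients
intercept = y.mean() - mu @ b_orig      # ...and recover the intercept

# both parameterizations give identical fitted values
same = np.allclose(y.mean() + Xs @ b_std, intercept + X @ b_orig)
```

The point is that the "confusing" coefficients are just the standardized-scale solution; rescaling reproduces the original-scale fit exactly.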
36,753 | Confused by MATLAB's implementation of ridge | You should set the scaled flag in ridge to 0, so the call looks like x = ridge(b, A, lambda, 0) (note that the response comes first). In this case, the first element of x is the constant (intercept) and the rest are the coefficients. In other words, x(2:end,:) should give the same result as you got by using (1). This is clearly stated in the MATLAB documentation. Hope this helps.
36,754 | Verifying neural network model performance | Just to make sure we are on the same page: You have a sequence of 1000 samples with 7 features each. There is a sequential pattern in there, which is why you process them with an RNN. At each timestep:
It depends. It might get better if you use different normalizations, hard to tell.
To me it just sounds like classification. I am not sure what you mean by ranking exactly.
No reason to be skeptical. Normally, training error drops like that--extremely quickly for the first few iterations, very slowly afterwards.
No, absolutely not. For some tasks, less than 100 iterations (= passes over the training set) suffice.
You are the one who has to say whether the error is small enough. :) We can't tell you without knowing what you are using the network for.
Hard to tell. You should use early stopping instead. Train the network until the error on some held-out validation set starts to rise--that is the point beyond which you are only overfitting. Use the weights found at that point to evaluate on a test set. (That makes three sets: training, validation, and test.)
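A minimal sketch of the early-stopping logic (in Python, with a made-up validation-error curve standing in for actual training):

```python
def early_stopping(val_errors, patience=2):
    """Return (best_epoch, best_error), stopping once there has been no
    improvement for `patience` consecutive epochs."""
    best_epoch, best_err, bad = 0, float("inf"), 0
    for epoch, err in enumerate(val_errors):
        if err < best_err:
            best_epoch, best_err, bad = epoch, err, 0
        else:
            bad += 1
            if bad >= patience:
                break
    return best_epoch, best_err

# validation error falls, then rises again as the network starts to overfit
curve = [0.9, 0.5, 0.3, 0.25, 0.27, 0.31, 0.4]
best_epoch, best_err = early_stopping(curve)    # epoch 3, error 0.25
```

You would keep the weight snapshot from `best_epoch` and report the test-set error with those weights.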
Here are some tips that I can give:
make sure to clamp your maximal updates to some fixed value. E.g. when you do a learning step, don't apply updates bigger than 0.1 (RPROP can already do this),
try Long Short-Term Memory,
try Hessian free optimization (Ilya Sutskever has code on his webpage).
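The update-clamping tip above can be sketched like this (my reading of "clamp your maximal updates" applied to a plain gradient step; RPROP bounds its step sizes in a similar spirit):

```python
def clamp(step, max_step=0.1):
    # limit each parameter update to [-max_step, +max_step]
    return max(-max_step, min(max_step, step))

weights = [0.5, -1.2, 3.0]
raw_steps = [0.9, -0.02, -4.0]            # possibly exploding update steps
weights = [w - clamp(s) for w, s in zip(weights, raw_steps)]
```

Large steps get truncated to 0.1 in magnitude while small steps pass through unchanged, which helps keep RNN training from blowing up.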
36,755 | Associating a probability to disease propagation in regions of a map | Alright, first, I actually think your question is more suited for the new "Computational Science" site that should be coming into Public Beta through Area 51 soon. There are a number of issues behind the modeling of infectious diseases that are not really within the scope of a statistical analysis site.
Answering your questions in order:
"Easy to interpret probabilities" is somewhat vague - probabilities of what? The disease moving to that region? Likelihood of being infected, as expressed through something like the final prevalence of disease?
There are some common ways to model this problem however. The first is an extension of the classic SIR type model, commonly known as "meta-population" models. Essentially, rather than a single set of SIR equations, you have a series of them, one for each region, with parameters in the model governing the interaction between the populations (in your case, map regions). These can be deterministic, in which case very little math is needed, or stochastic, which produces nice distributions of results for further statistical analysis.
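As a concrete sketch of the deterministic flavor (illustrative parameter values and simple Euler integration--not a calibrated disease model), here is a two-region meta-population SIR in Python, where a small coupling term lets cases in one region drive the force of infection in the other:

```python
def simulate(steps=10000, dt=0.01, beta=0.5, gamma=0.2, coupling=0.01):
    # region A starts with 1 infected out of 1000; region B is disease-free
    S, I, R = [999.0, 1000.0], [1.0, 0.0], [0.0, 0.0]
    for _ in range(steps):
        dS, dI, dR = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
        for i in (0, 1):
            j = 1 - i
            N = S[i] + I[i] + R[i]
            # force of infection: local cases plus weak coupling to the other region
            foi = beta * (I[i] + coupling * I[j]) / N
            dS[i] = -foi * S[i]
            dI[i] = foi * S[i] - gamma * I[i]
            dR[i] = gamma * I[i]
        for i in (0, 1):
            S[i] += dS[i] * dt
            I[i] += dI[i] * dt
            R[i] += dR[i] * dt
    return S, I, R

S, I, R = simulate()   # the epidemic reaches region B despite starting only in A
```

Replacing the deterministic rates with random draws turns this into the stochastic version that yields distributions of outcomes.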
Another, less "classic" way as mentioned by @Spacedman, is to use an agent-based model to track individuals. This kind of model is somewhat more difficult to implement, but has the advantage of producing individual level data that can be analyzed using more conventional statistical techniques.
You could also simulate this by representing the map as a set of nodes in a network, and modeling the spread of disease over that network using something like a percolation model.
As you can see, approaches to your problem abound.
In terms of specifics, it depends very much on your disease system, what you're trying to model, and what type of assumptions you're willing to settle for. Along with how much either programming or mathematical complexity you're willing to tolerate. It depends so much that this question is essentially unanswerable, on the scale of "What variables should I put in a regression model".
The answer is likely: A fair number. By way of example, even for a model that doesn't have geographic spread, but could, one model I'm working on has roughly 23 parameters. The agent based version has more. My best advice is to consult an expert.
Spatio-temporal models, meta-population models, spatially discrete models, disease percolation models...there are tons of names. I'd say meta-population models are probably the one which will yield a number of example models the most swiftly, but there are lots of different names. One way to search is also to search for models of the disease you're interested in, to see if there's an approach you could replicate.
36,756 | Associating a probability to disease propagation in regions of a map | Sounds like you want to do "Agent-based modelling" - if you are computing with the behaviour of individuals. [Personally I prefer the term 'simulation' to 'modelling' when it's not what a statistician would call a 'model'...]
I suspect the simplest example is just to have a finite time-step and rates of transfer between zones; then pick which people move. Infected people have infection probabilities and everyone has a susceptibility probability. Work out who catches the disease in the next time step.
What you then get is a non-deterministic (because you have some random number generator) simulation that you can run hundreds of times to get simulation uncertainties on the spread of your disease. I'm not sure how easy it is to compare these kinds of models with reality though...
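That recipe can be sketched in a few lines of Python (all parameters are made up for illustration):

```python
import random

rng = random.Random(1)
# 300 agents spread over 3 zones; the first five start out infected
agents = [{"zone": rng.randrange(3), "sick": i < 5} for i in range(300)]

def step(agents, p_move=0.05, p_transmit=0.3):
    for a in agents:                              # movement phase
        if rng.random() < p_move:
            a["zone"] = rng.randrange(3)
    sick_in = {z: sum(1 for a in agents if a["sick"] and a["zone"] == z)
               for z in range(3)}
    for a in agents:                              # infection phase
        if not a["sick"]:
            # chance of escaping every infectious zone-mate independently
            p_escape = (1 - p_transmit) ** sick_in[a["zone"]]
            if rng.random() > p_escape:
                a["sick"] = True

for _ in range(10):
    step(agents)

n_sick = sum(a["sick"] for a in agents)
```

Rerunning with different seeds gives the spread of outcomes mentioned above.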
36,757 | General approach for non-parametric two-way ANOVA | The proportional odds ordinal logistic model is a generalization of the Wilcoxon and Kruskal-Wallis tests that extends to multiple covariates, interactions, etc. It is a semiparametric method that only uses the ranks of Y. It handles continuous Y, creating $k-1$ intercepts where $k$ is the number of unique Y values.
36,758 | General approach for non-parametric two-way ANOVA | This sounds good to me. There are some issues to consider though:
In option 2, you need to make sure to correct the p-values in your wilcox tests for multiple hypothesis testing. The pairwise.wilcox.test function in R will do this for you.
In my experience, even though the bootstrap approach is very nice here, if other people in your field (e.g. paper reviewers) are unfamiliar with it you can draw a lot of criticism.
It really depends on what is normal in your field, and what the purpose of the analysis is. If this work is for a paper, and practitioners in your field have a recipe for data analysis which does not match this one, it might be easier to justify using that approach (even if it's wrong). For example, in some fields the 'correct' procedure is just "Use ANOVA". No extra tests are performed, and the results are accepted as valid. ANOVA is reasonably robust to the violation of normality too, so in practice this approach (although overly simplistic) works out okay.
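For reference, the default correction applied by pairwise.wilcox.test is Holm's step-down method; here is a small Python sketch of that adjustment (intended to match R's p.adjust(method = "holm")):

```python
def holm(pvals):
    # step-down Holm adjustment: multiply the k-th smallest p-value by
    # (m - k + 1), enforce monotonicity, and cap at 1
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adj, running = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        running = max(running, (m - rank) * pvals[i])
        adj[i] = min(1.0, running)
    return adj

adj = holm([0.01, 0.04, 0.03])   # roughly [0.03, 0.06, 0.06]
```

Any adjusted p-value below your alpha remains significant after accounting for the multiple pairwise comparisons.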
36,759 | Fit a robust regression line using an MM-estimator in R | By default, the documentation indicates that rlm uses psi=psi.huber weights. Thus, if you want to use Tukey's bisquare, you need to specify psi=psi.bisquare. The default settings are psi.bisquare(u, c = 4.685, deriv = 0), which you can change as desired. For instance, possibly something like
rlm(x ~ y, method="MM", psi=psi.bisquare, maxit=50)
You may also want to investigate whether you should use least-trimmed squares (init="lts") to initialize your starting values. The default is to use least squares.
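For intuition about what the bisquare $\psi$ does, here is a pure-Python sketch of one ingredient of such estimators--iteratively reweighted least squares with Tukey's bisquare weights for a straight-line fit. (A real MM-estimator additionally needs a high-breakdown initial fit and scale estimate, which rlm handles for you; this is only an illustration.)

```python
def bisquare(u, c=4.685):
    # Tukey's bisquare weight: smooth downweighting, exactly zero beyond c
    return (1 - (u / c) ** 2) ** 2 if abs(u) < c else 0.0

def irls_line(x, y, iters=20, c=4.685):
    a, b = 0.0, 0.0                       # intercept, slope
    for _ in range(iters):
        res = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        mad = sorted(abs(r) for r in res)[len(res) // 2] / 0.6745
        s = max(mad, 1e-6)                # crude robust scale, guarded against zero
        w = [bisquare(r / s, c) for r in res]
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swy = sum(wi * yi for wi, yi in zip(w, y))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = sw * swxx - swx * swx
        a = (swy * swxx - swx * swxy) / det
        b = (sw * swxy - swx * swy) / det
    return a, b

xs = list(range(10))
ys = [2.0 + 0.5 * v for v in xs]
ys[9] = 40.0                              # one gross outlier
a, b = irls_line(xs, ys)                  # recovers roughly (2.0, 0.5)
```

The outlier's residual exceeds the cutoff, so its weight goes to zero and the fitted line tracks the other nine points.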
36,760 | Is Latin hypercube sampling effective in multiple dimensions? | I have split the issues described in your post into three questions below. A good reference for results on Latin Hypercube Sampling and other variance reduction techniques is this book chapter. Also, this book chapter provides information on some of the 'basics' of variance reduction.
Q0. What is variance reduction? Before going into the details, it's helpful to recall what 'variance reduction' actually means. As explained in the 'basics' book chapter, the error variance associated with a Monte Carlo procedure is typically of the form $\sigma^2/n$ under IID sampling. To reduce the error variance, we can either increase the sample size $n$ or find a way to reduce $\sigma$. Variance reduction is concerned with ways of reducing $\sigma$, so such methods may not have any effect on the way in which the error variance changes as $n$ varies.
Q1. Has Latin Hypercube Sampling been correctly implemented?
Your written description seems correct to me and is consistent with the description in the book chapter. My only comment is that the ranges of the $u^i_D$ variables don't seem to fill the whole unit interval; it seems that you actually require $u^i_D \in [\frac{i-1}{N}, \frac{i}{N}]$, but hopefully this error did not creep into your implementation. Anyway, the fact that both of the implementations gave similar results would suggest that your implementation is likely to be correct.
Q2. Are your results consistent with what you might expect from LHS? Proposition 10.4 in the book chapter states that the LHS variance can never be (much) worse than the variance obtained from IID sampling. Often, the LHS variance is much less than the IID variance. More precisely, Proposition 10.1 states that, for the LHS estimate $\hat{\mu}_{LHS}=\frac{1}{n} \sum_{i=1}^n f(X_i)$, we have $$\mathrm{Var}(\hat{\mu}_{LHS})=n^{-1}\int e(x)^2dx+o(n^{-1})$$ where $e(x)$ is the 'residual from additivity' of the function $f$ i.e. $f$ minus its best additive approximation (see p.10 of book chapter for details, $f$ is additive if we can write $f(x)=\mu+\sum_{j=1}^D f_j (x_j)$).
For $D=1$, every function is additive so $e=0$ and $\mathrm{Var}(\hat{\mu}_{LHS})=o(n^{-1})$ from Proposition 10.1. In fact, for $D=1$ LHS is equivalent to grid based stratification (Section 10.1 in book chapter) so the variance is actually $O(n^{-3})$ (equation 10.2 in book chapter; assumes $f$ is continuously differentiable). This seems not inconsistent with your first graph. The main point is that $D=1$ is a very special case!
For $D=2$, it is likely the case that $e\neq 0$ so you might expect a variance of order $O(n^{-1})$. Again, this is not inconsistent with your second graph. The actual variance reduction achieved (in comparison to IID sampling) will depend on how close your chosen function is to being additive.
In summary, LHS can be effective in low to moderate dimensions and especially for functions well approximated by additive functions.
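The $D=1$ special case is easy to verify numerically. In this Python sketch (my illustration, not from the book chapter), LHS estimates of $\int_0^1 x^2\,dx$ vary far less across replications than plain IID sampling, consistent with the much faster variance decay:

```python
import random

rng = random.Random(42)

def f(u):
    return u * u                          # integrand; true integral = 1/3

def estimate_iid(n):
    return sum(f(rng.random()) for _ in range(n)) / n

def estimate_lhs(n):
    # one point per stratum [i/n, (i+1)/n); in 1D this is LHS/stratification
    return sum(f((i + rng.random()) / n) for i in range(n)) / n

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

iid = [estimate_iid(100) for _ in range(200)]
lhs = [estimate_lhs(100) for _ in range(200)]
v_iid, v_lhs = variance(iid), variance(lhs)   # v_lhs is orders of magnitude smaller
```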
36,761 | Is Latin hypercube sampling effective in multiple dimensions? | http://statweb.stanford.edu/~owen/mc/Ch-var-adv.pdf
This paper discusses the variance reduction of Latin Hypercube Sampling in multiple dimensions. LHS doesn't enforce uniformity when sampling in multiple dimensions because it simply samples in each dimension independently and then combines the dimensions randomly. Stratified sampling of $N^2$ bins as you mention is also referred to as Orthogonal Sampling, as discussed on the Wikipedia page: https://en.wikipedia.org/wiki/Latin_hypercube_sampling; it enforces multi-dimensional uniformity more strongly by sampling from the bins of all dimensions combined instead.
With a few tweaks to this style of sampling, the error variance can be shown to be $O(N^{-1-2/d})$ (in the reference above). Although this provides large gains in small dimensions, in larger dimensions it degrades back toward the performance of ordinary Monte Carlo.
36,762 | Is Latin hypercube sampling effective in multiple dimensions? | I want to comment on "additivity". LHS makes sure that, e.g., X1 and X2 are each well distributed (usually in (0,1)), so if a design depends on only one variable you will get a "perfect" histogram and strong variance reduction. For the integration of f = 100*X1 + X2 you will get good results too, but not for X1 - X2! This difference has an almost i.i.d. random distribution, with no LHS characteristics.
In electronics, designs often exploit the fact that the influences of two parameters will mostly cancel each other (differential pair, current mirror, replica circuits, etc.), but the effect of the mismatch X1 - X2 is still present and often dominant. Thus LHS MC analysis behaves no better than random MC in many electrical designs.
36,763 | Predicting total number of bugs based on number of bugs revealed by each tester | This is a capture-recapture question, in this case with more than two visits, briefly discussed by Wikipedia and in more detail by the R vignette on the package Rcapture
Your assumptions of
different testers having different skills ($t$)
each bug being equally likely to be caught (not $h$)
the probability of a bug being caught not being affected by whether it was previously caught (not $b$)
suggest an $M_t$ model. Using the R code
library(Rcapture)
bugscaught <- matrix(c(1,1,1,1,1,0,0,0,0,0,
0,0,1,0,1,1,1,0,0,0,
1,0,1,0,1,0,0,1,1,1),ncol=3)
closedp(bugscaught)
gives the following
Number of captured units: 10
Abundance estimations and model fits:
abundance stderr deviance df AIC
M0 13.1 3.1 6.781 5 23.614
Mt 12.9 3.0 6.128 3 26.961
Mh Chao 26.3 20.9 2.472 4 21.305
Mh Poisson2 115.9 237.3 2.472 4 21.305
Mh Darroch 696.0 2253.0 2.472 4 21.305
Mh Gamma3.5 4565.3 19853.4 2.472 4 21.305
Mth Chao 25.6 20.0 1.708 2 24.541
Mth Poisson2 113.6 232.5 1.708 2 24.541
Mth Darroch 699.7 2266.0 1.708 2 24.541
Mth Gamma3.5 4714.8 20515.0 1.708 2 24.541
Mb 16.7 13.7 6.526 4 25.359
Mbh 1.0 13.6 5.751 3 26.584
and for an $M_t$ model suggests a central estimate of 12.9 bugs with a standard deviation of 3, so I would suggest a range from 10 to something like 20. If you want to dig deeper then the contents of closedp(bugscaught)$glm$Mt may have useful information.
As you can see, if you instead assume some bugs will be harder than others to find then depending on your model the central estimate could reach the thousands.
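If you don't have Rcapture handy, the $M_t$ estimate can be reproduced by brute force, because for fixed $N$ the capture probabilities profile out at $\hat p_j = n_j/N$. The Python sketch below is my own simplification (an integer-valued search with an arbitrary cap of 200); Rcapture fits a log-linear model and reports a continuous estimate of 12.9, so the two won't match exactly.

```python
from math import lgamma, log

# Capture histories from the example: rows = bugs, columns = testers.
captures = [
    [1, 0, 1], [1, 0, 0], [1, 1, 1], [1, 0, 0], [1, 1, 1],
    [0, 1, 0], [0, 1, 0], [0, 0, 1], [0, 0, 1], [0, 0, 1],
]
M = len(captures)                         # 10 distinct bugs observed
n = [sum(col) for col in zip(*captures)]  # catches per tester: [5, 4, 6]

def profile_loglik(N):
    # M_t likelihood with each p_j profiled out at its MLE n_j / N:
    # log N!/(N-M)! + sum_j [ n_j log(n_j/N) + (N-n_j) log(1 - n_j/N) ]
    ll = lgamma(N + 1) - lgamma(N - M + 1)
    for nj in n:
        ll += nj * log(nj / N) + (N - nj) * log(1 - nj / N)
    return ll

Nhat = max(range(M, 200), key=profile_loglik)
print(Nhat)   # 12 -- the integer MLE, close to Rcapture's 12.9
```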
36,764 | Fourier data with non-integer periods, correcting for phase bias | The periodogram will estimate the periods. It will also handle noisy data and pick out multiple sinusoidal components of different period.
A quick and dirty Mathematica calculation is
n = 200; data = N[Table[Sin[3.17*2*Pi*x/200], {x, 1, n}]];
welch = 1 - (2 (Range[n] - (n - 1)/2)/(n + 1))^2;
fData = Append[Abs[Fourier[welch data]]^2 / (Plus @@ (welch^2)), 0];
fData = (fData + Reverse[fData])/2;
fData = fData / (Plus @@ fData);
(You don't really need the last two steps, but I kept them in because they produced the illustrations below.)
Here's a plot of the important part of the periodogram in this example:
The points are the periodogram values while the line is a quick smooth (I used a polynomial interpolator of order 5, but with more time would apply a Gaussian kernel smooth):
f = Interpolation[Log[fData], InterpolationOrder -> 5];
period = x /. (NMaximize[f[x + 1], x] // Last)
The maximum of the smoothed value occurs at $3.17661$, whose closeness to $3.17$ is evidence of the promise of this technique.
Once you have an estimate of the period, it's straightforward to find the phase and amplitude (use nonlinear least squares, or run a tight bandpass filter over the Fourier transform and invert it).
NonlinearModelFit[data, a Sin[\[Phi] + period*2*Pi*x/200], {a, \[Phi]}, x]
The estimated amplitude ($a$) is $1.00011$ and phase ($\phi$) is $0.0212758$, both close to the actual values of $1$ and $0$, respectively. (The phase estimate is less than one sampling interval ($2\pi/200 = 0.0314$) from the correct phase, which is about as good as one can expect.) Compare the data to this fit:
The residuals exhibit some quasi-periodicity (attributable to cutting off the data at a non-integral period) and range from $-0.018$ to $0.021$.
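The same pipeline is easy to reproduce in Python with NumPy. In this sketch the zero-padding factor of 16 stands in for the interpolation step, and a linear least-squares fit at the estimated frequency stands in for NonlinearModelFit; both are my substitutions, not the original method.

```python
import numpy as np

n = 200
x = np.arange(1, n + 1)
data = np.sin(3.17 * 2 * np.pi * x / n)      # 3.17 cycles over n samples

# Welch window, as in the Mathematica code
welch = 1 - (2 * (np.arange(n) - (n - 1) / 2) / (n + 1)) ** 2

# Zero-padded periodogram: the padding interpolates the spectrum, so the
# peak can land between the integer Fourier frequencies.
pad = 16 * n
spec = np.abs(np.fft.rfft(welch * data, pad)) ** 2
cycles = spec.argmax() * n / pad             # frequency in cycles per record

# With the period in hand, amplitude and phase come from a linear
# least-squares fit on sin and cos at that frequency.
w = 2 * np.pi * cycles / n
c, *_ = np.linalg.lstsq(np.column_stack([np.sin(w * x), np.cos(w * x)]),
                        data, rcond=None)
amp, phase = np.hypot(c[0], c[1]), np.arctan2(c[1], c[0])
print(cycles, amp, phase)                    # close to 3.17, 1 and 0
```

The least-squares step works because $A\sin(\omega x + \phi) = A\cos\phi\,\sin(\omega x) + A\sin\phi\,\cos(\omega x)$ is linear in the two unknown coefficients once $\omega$ is fixed.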
36,765 | What article or book clearly states that one can't use protected t-tests in within subjects ANOVA? | I don't know of any paper that makes that explicit statement probably because it's not entirely true by itself.
You are correct that sphericity should be met. But, you've left the issue of sphericity vague in your question because "met" is ill defined and somewhat subjective. With only 4 levels you probably aren't having very large sphericity violations. Masson & Loftus (2003; Loftus & Masson, 1994) have mentioned that you should adhere to sphericity before using pooled measures in similar situations to what you describe and have given guidelines; but there's no hard and fast rule. The kinds of comparisons they're doing in those papers are equivalent to repeated measures t-tests in terms of power and error rates so you should look at them.
Then there's the whole issue of whether there's any protection from a significant ANOVA in "protected" tests. What's being requested is pretty equivalent to Fisher's protected least significant difference (PLSD). These protected tests have been demonstrated not to be protected against alpha inflation in general. A simple simulation of a 3-level ANOVA with A1<A2 and A2=A3 will show a higher likelihood to find A2,A3 differences than expected from alpha using PLSD. (reference escapes me... but not the answer you want anyway)
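For what it's worth, that kind of simulation is easy to sketch. The Python below is my own toy version (the group size, effect size, and hard-coded 5% critical values for df = (2, 27) and df = 27 are arbitrary choices); it shows the related point that under a partial null the omnibus F is almost always significant, so the A2 vs A3 comparison gets essentially no protection: its false-positive rate comes back up to the plain per-test alpha, far above the rate PLSD gives under the complete null.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nsim = 10, 40_000
F_CRIT, T_CRIT = 3.354, 2.052   # approx. 5% critical values, df=(2,27) and 27

def plsd_rejects_23(means):
    # One experiment: omnibus one-way F test, then, if significant,
    # an LSD t-test (pooled MSE) comparing groups 2 and 3.
    g = means[:, None] + rng.normal(size=(3, n))
    gm = g.mean(axis=1)
    mse = g.var(axis=1, ddof=1).mean()          # pooled within-group variance
    f = n * ((gm - gm.mean()) ** 2).sum() / 2 / mse
    t = abs(gm[1] - gm[2]) / np.sqrt(2 * mse / n)
    return f > F_CRIT and t > T_CRIT

partial = np.array([3.0, 0.0, 0.0])             # A1 differs; A2 = A3 is null
complete = np.zeros(3)
rate_partial = np.mean([plsd_rejects_23(partial) for _ in range(nsim)])
rate_null = np.mean([plsd_rejects_23(complete) for _ in range(nsim)])
print(rate_partial, rate_null)   # ~0.05 (no protection) vs well below 0.05
```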
That said, your argument about individual variances is problematic because, even if homogeneity or sphericity are not perfect, you often get a more accurate estimate from the pooled value. Therefore, even though the whole idea of the significant F protecting the alpha is questionable, you should probably be using the pooled variance. You haven't presented any argument that you get more protection from alpha inflation using individual tests.
And with all that said...
I'm not sure what you're trying to defend, a difference you found or one you did not. Regardless, don't. If pooling the variance makes a new difference appear or something go away, report that. Report your effect sizes, your beliefs about the fact that sphericity isn't met... just tell the whole story. You should also make a statement about the power you have. There's no firm ground here, in what you've presented, to argue that the reviewer is wrong in the general case.
36,766 | Difference between loadings and correlations between observed variables and factor saved scores in factor analysis | I don't know R very well, so I can't track your code. But factor scores (unless the factors are simply principal components) are always approximate: exact scores cannot be computed because the uniqueness value for each case and variable is eternally unobservable. Thus, observed correlations between computed factor scores and the variables only approximate true correlations between factors and variables, the loadings.
36,767 | Difference between loadings and correlations between observed variables and factor saved scores in factor analysis | fa() uses the minres factoring method by default fm="minres".
The loadings correspond to the correlations only with the principal components factorization method. You can compute them with principal():
fa1 <- principal(X, nfactors = 3, rotate = 'none')
cor(X, fa1$scores)
PC1 PC2 PC3
[1,] -0.10920804 0.53177096 0.62089920
[2,] 0.38040379 0.25737641 -0.61853742
[3,] -0.63568952 -0.07448425 0.42456182
[4,] -0.65982013 0.31649913 -0.44502612
[5,] 0.01177613 -0.74010933 0.10943722
[6,] -0.23698177 0.22859832 0.21876281
[7,] 0.22409045 0.43785156 0.36644127
[8,] 0.69310850 0.26912793 0.47151066
[9,] 0.15024503 0.65373157 -0.39777599
[10,] 0.85889193 -0.23091790 -0.02241569
fa1$loadings[1:10, 1:3]
PC1 PC2 PC3
[1,] -0.10920804 0.53177096 0.62089920
[2,] 0.38040379 0.25737641 -0.61853742
[3,] -0.63568952 -0.07448425 0.42456182
[4,] -0.65982013 0.31649913 -0.44502612
[5,] 0.01177613 -0.74010933 0.10943722
[6,] -0.23698177 0.22859832 0.21876281
[7,] 0.22409045 0.43785156 0.36644127
[8,] 0.69310850 0.26912793 0.47151066
[9,] 0.15024503 0.65373157 -0.39777599
[10,] 0.85889193 -0.23091790 -0.02241569
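The identity behind this - for principal components, the loadings are exactly the correlations between the variables and the component scores, since cor(X, scores) = VΛ/√Λ = V√Λ - can be checked from scratch. A small Python/NumPy sketch on random standardized data (my own example, not the poster's X):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize columns

R = X.T @ X / (len(X) - 1)                 # correlation matrix
vals, vecs = np.linalg.eigh(R)
vals, vecs = vals[::-1], vecs[:, ::-1]     # descending eigenvalue order

loadings = vecs * np.sqrt(vals)            # PC loadings = V * sqrt(lambda)
scores = X @ vecs                          # unstandardized component scores

# Correlating each variable with each score reproduces the loadings.
corr = np.array([[np.corrcoef(X[:, j], scores[:, k])[0, 1]
                  for k in range(5)] for j in range(5)])
print(np.allclose(corr, loadings))         # True
```

With any other factoring method (minres, ML, ...) the scores are only estimated, so this equality breaks, which is exactly the discrepancy the question observed.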
36,768 | Where can I read about the justification for the use of parametric probability distributions? | Nice question. I like Ben Bolker's descriptions from his book, Ecological Models and Data in R (preprint of the relevant chapter; the bestiary of distributions starts on page 19).
For each distribution, he has a few sentences to a page on where it comes from and what it's used for, plus some math and graphs.
36,769 | Where can I read about the justification for the use of parametric probability distributions? | In some sense there is no such thing as statistics without "parameters" and "models". It is an arbitrary labelling to some extent, depending on what you recognise as a "model" or "parameter". Parameters and models are basically ways of translating assumptions and knowledge about the real world into a mathematical system. But this is true of any mathematical algorithm. You need to somehow convert your problem from the real world into whatever mathematical framework you intend to use to solve it.
Using a probability distribution which has been assigned according to some principle is one way to do this conversion in a systematic and transparent way. The best principles I know of are the principle of maximum entropy (MaxEnt) and the principle of transformation groups (which I think could be also called the principle of "invariance" or "problem-indifference").
Once assigned you can use Bayesian probability theory to coherently manipulate these "input" probabilities which contain your information and assumptions into "output" probabilities which tell you how much uncertainty is present in the analysis you're interested in.
A few introductions from the Bayes/MaxEnt perspective described above can be found here, here, and here. These are based on the interpretation of probability as an extension of deductive logic. They are more on the theoretical side of things.
As a minor end-note, I recommend these methods mainly because they seem most appealing to me - I can't think of a good theoretical reason for giving up the normative behaviours which lie behind the Bayes/MaxEnt rationale. Of course, you may not be as compelled as I am, and I can think of a few practical compromises around feasibility and software limitations. "Real world" statistics can often be about which ideology you are approximating (approx Bayes vs approx Maximum Likelihood vs approx Design based) or which ideology you understand and are able to explain to your clients.
36,770 | Where can I read about the justification for the use of parametric probability distributions? | A Bayesian way to introduce and motivate parametric models is through Exchangeability and De Finetti's Representation Theorem. There is some discussion in this question:
What is so cool about de Finetti's representation theorem?
A great introduction is given in the first chapter of Schervish's Theory of Statistics. All the measure theoretic language needed for the discussion is given in his tour de force Appendix (with complete proofs!). I've learned much from this book, and I strongly recommend you to buy it.
This paper studies the generality of the Bayesian construction:
Sandra Fortini, Lucia Ladelli and Eugenio Regazzini, Sankhyā: The Indian Journal of Statistics, Series A, Vol. 62, No. 1 (Feb. 2000), pp. 86-109.
It's available for download here: http://sankhya.isical.ac.in/search/62a1/62a17092.pdf
36,771 | High quality publishing house for books in the field of statistics | Well, here is a list of the companies that paid to exhibit at the most recent (2011) Joint Statistical Meetings. This includes a lot of publishers. I am not sure why a major publisher would not be at the biggest statistical meeting in the world, or at least at the largest one to draw a lot of different teachers and researchers from a large span of areas.
There's Springer, OUP, Sage, Wiley, W.H. Freeman, Elsevier, CRC, SIAM, CUP, BEP...
Did I miss anyone?
Note that some of these are conglomerates and may have different publishers or different series under the same umbrella firm.
By the way, O'Reilly is not a major statistics publisher. While I have a lot of their books, I don't turn to any of them for statistical insights. I am not sure they have any statistical editors, for that matter, but they do a good job on books on programming.
36,772 | High quality publishing house for books in the field of statistics | Another good one is Sage, which produces hundreds of small monographs in the Quantitative Applications in the Social Sciences series.
36,773 | How to derive Poisson distribution from gamma distribution? | I'm sure that Durrett's proof is nice. A straightforward solution to the question asked is as follows.
For $n \geq 1$
$$
\begin{array}{rcl}
P(N_t = n) & = & \int_0^t P(S_{n+1} > t \mid S_n = s) P(S_n \in ds) \\
& = & \int_0^t P(T_{n+1} > t-s) P(S_n \in ds) \\
& = & \int_0^t e^{-\lambda(t-s)} \frac{\lambda^n s^{n-1} e^{-\lambda s}}{(n-1)!} \mathrm{d} s \\
& = & e^{-\lambda t} \frac{\lambda^n }{(n-1)!} \int_0^t s^{n-1} \mathrm{d} s \\
& = & e^{-\lambda t} \frac{(\lambda t)^n}{n!}
\end{array}
$$
For $n = 0$ we have $P(N_t = 0) = P(T_1 > t) = e^{-\lambda t}$.
This does not prove that $(N_t)_{t \geq 0}$ is a Poisson process, which is harder, but it does show that the marginal distribution of $N_t$ is Poisson with mean $\lambda t$.
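The marginal result is also easy to check numerically: simulate the exponential interarrival times $T_i$, count the arrivals up to time $t$, and compare the empirical frequencies with the Poisson pmf. A Python sketch (the rate, horizon, and replication count are arbitrary choices):

```python
import random
from math import exp, factorial

random.seed(0)
lam, t, reps = 1.5, 2.0, 100_000

def count_arrivals():
    # Sum i.i.d. Exponential(lam) interarrival times until they pass t.
    s, n = 0.0, 0
    while True:
        s += random.expovariate(lam)
        if s > t:
            return n
        n += 1

counts = [count_arrivals() for _ in range(reps)]
for n in range(6):
    mc = sum(c == n for c in counts) / reps
    exact = exp(-lam * t) * (lam * t) ** n / factorial(n)
    print(n, round(mc, 4), round(exact, 4))   # empirical vs Poisson pmf
```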
For $n \geq 1$
$$
\begin{array}{rcl}
P(N_t = n) & = & \int_0^t P(S_{n+1} > t \mid S_n = s) P( | How to derive Poisson distribution from gamma distribution?
I'm sure that Durrett's proof is nice. A straight forward solution to the question asked is as follows.
For $n \geq 1$
$$
\begin{array}{rcl}
P(N_t = n) & = & \int_0^t P(S_{n+1} > t \mid S_n = s) P(S_n \in ds) \\
& = & \int_0^t P(T_{n+1} > t-s) P(S_n \in ds) \\
& = & \int_0^t e^{-\lambda(t-s)} \frac{\lambda^n s^{n-1} e^{-\lambda s}}{(n-1)!} \mathrm{d} s \\
& = & e^{-\lambda t} \frac{\lambda^n }{(n-1)!} \int_0^t s^{n-1} \mathrm{d} s \\
& = & e^{-\lambda t} \frac{(\lambda t)^n}{n!}
\end{array}
$$
For $n = 0$ we have $P(N_t = 0) = P(T_1 > t) = e^{-\lambda t}$.
This does not prove that $(N_t)_{t \geq 0}$ is a Poisson process, which is harder, but it does show that the marginal distribution of $N_t$ is Poisson with mean $\lambda t$. | How to derive Poisson distribution from gamma distribution?
I'm sure that Durrett's proof is nice. A straight forward solution to the question asked is as follows.
For $n \geq 1$
$$
\begin{array}{rcl}
P(N_t = n) & = & \int_0^t P(S_{n+1} > t \mid S_n = s) P( |
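As a sanity check on the derivation, one can simulate the counting process and compare the empirical distribution of $N_t$ with the derived pmf. This is an illustrative sketch (the function and parameter names are mine, not from the original answer):

```python
import math
import random

def count_arrivals(lam, t, rng):
    """N_t: number of arrivals in [0, t] of a renewal process with
    i.i.d. Exponential(lam) interarrival times T_1, T_2, ..."""
    n, s = 0, 0.0
    while True:
        s += rng.expovariate(lam)   # s = S_{n+1}
        if s > t:                   # S_{n+1} > t  means  N_t = n
            return n
        n += 1

rng = random.Random(0)
lam, t, reps = 2.0, 3.0, 100_000
counts = [count_arrivals(lam, t, rng) for _ in range(reps)]

# Empirical frequency of {N_t = n} vs the derived pmf e^{-lam*t} (lam*t)^n / n!
for n in range(5):
    emp = sum(c == n for c in counts) / reps
    pmf = math.exp(-lam * t) * (lam * t) ** n / math.factorial(n)
    print(f"n={n}: empirical {emp:.4f}  vs  Poisson pmf {pmf:.4f}")
```

The empirical frequencies should match the Poisson($\lambda t$) pmf to within Monte Carlo error.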
36,774 | Information on how value of k in k-fold cross-validation affects resulting accuracies | Not much of a "proof", but when k is small you are removing a much larger chunk of your data, so your model has a much smaller amount of data to "learn from". For k=5 you are removing 20% of the data each time, whereas for k=200 you are only removing 0.5%. Your model has a much better chance of picking up all the relevant "structure" in the training part when k is large. When k is small, there is a larger chance that the "left out" part will contain a structure which is absent from the "left in" bit - a bit like an "un-representative" sub-sample.
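To make the arithmetic concrete (a trivial sketch, not part of the original answer): each validation fold leaves out a fraction 1/k of the data.

```python
# Fraction of the data left out of training at each split is 1/k.
for k in (2, 5, 10, 200):
    held_out = 1 / k
    print(f"k={k:>3}: train on {(1 - held_out):.1%}, validate on {held_out:.1%}")
```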
36,775 | Should grades be assigned to students based on a normal distribution? | Why should grades be normally distributed?
Sometimes they are, but if the grades are not normally distributed, then the bell curve grading system, where the middle (say) 70% get C's, is probably not a good one to base grades on. That grading is pretty harsh anyway, and few instructors would actually do it.
Use distributions to describe the data, don't transform data to fit a particular distribution (although transformations can be helpful at times).
Suppose you use the bell curve grading system and, as an extreme case, everyone aces the class. How do you decide grades then?
Here is how I would decide final grades:
90-100%: A
80-90%: B
...
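Fixed cutoffs like these are easy to implement in code. A minimal sketch — note that the C/D/F thresholds below are my own illustrative extension of the truncated list above, not the author's:

```python
def letter_grade(score):
    """Fixed-cutoff grading: the grade depends only on the score,
    not on how other students performed (no curve)."""
    cutoffs = [(90, "A"), (80, "B"), (70, "C"), (60, "D")]  # C/D assumed
    for lo, grade in cutoffs:
        if score >= lo:
            return grade
    return "F"

print([letter_grade(s) for s in (95, 84, 71, 61, 42)])
```

Under such a scheme, if everyone aces the class, everyone gets an A — no paradox.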
36,776 | Should grades be assigned to students based on a normal distribution? | 1.) Usually, grades are on an ordinal scale. So in a strict statistical sense, an overall grade should not be something like the mean grade, since adding such variables is not defined. The median grade, however, does not clearly fit either, since it conveys the ranking between students in the same subject, not between subjects the same student learnt.
In the end, even in statistics departments students get their overall grade as the mean of the single grades. The reasoning is that in the median grade system (as long as the overall grade gets attention, see below), students would have no incentive to improve in the subjects in which they know they are weak. These weak subjects are like outliers, and the median grade would be robust to them.
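A toy numeric illustration of that robustness, using a made-up 1 (best) to 5 (worst) grade scale:

```python
import statistics

# One student, strong in most subjects but weak in a single one.
grades = [1, 1, 2, 1, 5]

print(statistics.mean(grades))    # pulled up by the single weak subject
print(statistics.median(grades))  # unaffected by it
```

The mean (2.0) is dragged toward the weak subject, while the median (1) ignores it — which is exactly why the median removes the incentive to work on weak subjects.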
Maybe one has to remember that grades serve different purposes:
Predict the graduate's performance in his future job.
Give a selection criterion on whom to give an opportunity for further studies (scholarship, Ph.D. studies).
Help the very dumb students identify their weak subjects where they have to work more.
Help professors to discipline their students (works of course only if purpose 1 and 2 are met).
An overall grade is only necessary for purpose 2, and there, only because a closer examination of the student's aptitude for the particular further studies is too expensive. Personally, I consider final grades as close to useless.
2.) As long as grades are correctly treated as ordinal, there is no problem with rescaling them. But there would also be no use, since proper statistical methods for such data are invariant under rescaling. However, if you compute mean grades, your transformation of the single grades will affect the overall grade. This might be considered unfair.
Also, the normal distribution on $\mathbb{R}$ is much finer than the coarse, discrete grading systems we are used to.
36,777 | Chow test or not? | Your question is most interesting to me and its solution has been my primary research for a number of years.
There are a number of ways that "a structural break" may occur.
If there is a change in the Intercept or a change in Trend in "the latter portion of the time series", then one would be better suited to perform Intervention Detection (N.B. this is the empirical identification of the significant impact of an unspecified Deterministic Variable, such as a Level Shift, a Change in Trend, or the onset of a Seasonal Pulse). Intervention Detection is then a precursor to Intervention Modelling, where a suggested variable is included in the model. You can find information on the web by googling "AUTOMATIC INTERVENTION DETECTION". Some authors use the term "OUTLIER DETECTION", but like a lot of statistical language this can be confusing/imprecise. Detected Interventions can be any of the following (each detecting a significant change in the mean of the residuals):
a 1 period change in Level ( i.e. a Pulse )
a multi-period contiguous change in Level ( i.e. a change in Intercept )
a systematic Pulse ( i.e. a Seasonal Pulse )
a trend change (i.e. 1,2,3,4,5,7,9,11,13,15, ...)
These procedures are easily programmed in R/SAS/Matlab and are routinely available in a number of commercially available time series packages; however, there are many pitfalls that you need to be wary of, such as whether to detect the stochastic structure first or to do intervention detection on the original series. This is like the chicken-and-egg problem. Early work in this area was limited to type 1's and as such will probably be insufficient for your needs.
If no such phenomenon is detected, then one might consider the CHOW TEST, which normally requires the user to pre-specify the point of hypothesized change. I have been researching and implementing procedures to DETECT the point of change by evaluating alternative hypothetical points in time to determine the most likely break point.
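For the pre-specified break-point case, the classical Chow F statistic can be computed directly: fit the pooled model, fit the two sub-sample models, and compare residual sums of squares. A minimal NumPy sketch (my own illustration with toy data, not the author's software):

```python
import numpy as np

def chow_test(y, X, split):
    """Chow F statistic for a structural break at index `split` in the
    linear model y = X b + e (X should include an intercept column)."""
    def rss(y, X):
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ b
        return float(r @ r)

    k = X.shape[1]                          # parameters per regime
    n = len(y)
    pooled = rss(y, X)                      # one model for the whole sample
    broken = rss(y[:split], X[:split]) + rss(y[split:], X[split:])
    return ((pooled - broken) / k) / (broken / (n - 2 * k))

# Toy series with a level shift (change in intercept) halfway through.
rng = np.random.default_rng(0)
t = np.arange(100.0)
y = 1.0 + 0.05 * t + rng.normal(0, 0.2, 100)
y[50:] += 2.0                               # the break at t = 50
X = np.column_stack([np.ones(100), t])
print(chow_test(y, X, 50))                  # large F => evidence of a break
```

The statistic is compared against an F distribution with $(k, n-2k)$ degrees of freedom; note the test assumes you knew to try `split = 50`, which is exactly the pre-specification issue the answer raises.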
In closing, one might also be sensitive to the possibility that there has been a structural change in the error variance, as that might mask the CHOW TEST, leading to a false acceptance of the null hypothesis of no significant break points in the parameters.
36,778 | Assessing multicollinearity of dichotomous predictor variables | I think you are trying to interpret P(A|B) and P(B|A) as if they should be the same thing. There is no reason for them to be equal, because of the product rule:
$$P(AB)=P(A|B)P(B)=P(B|A)P(A)$$
so unless $P(B)=P(A)$, $P(A|B) \neq P(B|A)$ in general. This explains the difference in the "yn" case. Unless you have a "balanced" table (row totals equal to column totals), the conditional probabilities (row and column) will not be equal.
A test for "logical/statistical independence" (but not causal independence) between categorical variables can be given as:
$$T=\sum_{ij} O_{ij} log\Big(\frac{O_{ij}}{E_{ij}}\Big)$$
Where $ij$ indexes the cells of the table (so in your example, $ij=11,12,21,22$). $O_{ij}$ is the observed value in the table, and $E_{ij}$ is what is "expected" under independence, which is simply the product of the marginals
$$E_{ij}=O_{\bullet \bullet}\frac{O_{i \bullet}}{O_{\bullet \bullet}}\frac{O_{\bullet j}}{O_{\bullet \bullet}}
=\frac{O_{i \bullet}O_{\bullet j}}{O_{\bullet \bullet}}$$
Where a "$\bullet$" indicates that you sum over that index. You can show that if you had a prior log-odds value for independence of $L_{I}$ then the posterior log-odds is $L_{I}-T$. The alternative hypothesis is $E_{ij}=O_{ij}$ (i.e. no simplification, no independence), for which $T=0$. Thus T says "how strongly" the data support non-independence, within the class of multinomial distributions. The good thing about this test is that it works for all $E_{ij}>0$, so you don't have to worry about a "sparse" table. This test will still give sensible results.
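A small sketch of computing $T$ for a two-way table (my own illustration; the counts are hypothetical). A zero cell contributes nothing, since $x \log x \to 0$, so only $E_{ij} > 0$ is required:

```python
import math

def independence_statistic(table):
    """T = sum_ij O_ij * log(O_ij / E_ij) for a two-way contingency table,
    with E_ij the product-of-marginals expectation."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    total = sum(row)
    T = 0.0
    for i, r in enumerate(table):
        for j, o in enumerate(r):
            e = row[i] * col[j] / total
            if o > 0:                      # O_ij = 0 contributes 0
                T += o * math.log(o / e)
    return T

# 2x2 table, e.g. rows = sex, columns = high/low IQ (hypothetical counts)
print(independence_statistic([[30, 10], [10, 30]]))  # far from 0: dependence
print(independence_statistic([[20, 20], [20, 20]]))  # exactly 0: independence
```

A large $T$ lowers the posterior log-odds $L_I - T$ of independence, matching the interpretation above.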
For the regressions, this is telling you that the average IQ value is different between the two values of sex, although I don't know the scale of the AIC difference (is this "big"?).
I'm not sure how appropriate the AIC is to a binomial GLM. It may be a better idea to look at the ANOVA and deviance tables for the LM and GLM respectively.
Also, have you plotted the data? Always plot the data! It will be able to tell you things that the test does not. How different do the IQs look when plotted by sex? How different do the sexes look when plotted by IQ?
36,779 | Assessing multicollinearity of dichotomous predictor variables | Why are you worried about multicolinearity? The only reason that we need this assumption in regression is to ensure that we get unique estimates. Multicolinearity only matters for estimation when it is perfect---when one variable is an exact linear combination of the others.
If your experimentally-manipulated variables were randomly assigned, then their correlations with the observed predictors as well as unobserved factors should be (roughly) 0; it is this assumption that helps you get unbiased estimates.
That said, non-perfect multicolinearity can make your standard errors larger, but only on those variables that experience the multicolinearity issue. In your context, the standard errors of the coefficients on your experimental variables should not be impacted.
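The perfect/imperfect distinction can be seen directly from the rank of the design matrix; a small NumPy sketch (variable names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)

# Perfect collinearity: the third column is an exact linear
# combination of the intercept and x1, so estimates are not unique.
X_perfect = np.column_stack([np.ones(n), x1, 2 * x1 + 3])
print(np.linalg.matrix_rank(X_perfect))   # rank 2 < 3 columns

# Imperfect collinearity: highly but not perfectly correlated.
x2 = 2 * x1 + 3 + rng.normal(scale=0.1, size=n)
X_near = np.column_stack([np.ones(n), x1, x2])
print(np.linalg.matrix_rank(X_near))      # full rank 3: unique estimates
```

In the second case OLS still identifies unique coefficients; the cost is inflated standard errors on the collinear columns only.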
36,780 | What to do following poor fit statistics for a confirmatory factor analysis? | I would probably do the following:
1) Split the data into two roughly equal segments.
2) Perform exploratory analyses on one of these and derive a new model.
3) Test the model on the other half of the data.
This at least will be something that's not done all that often, which will make it a better fit for publication (should you want to do so), and will give you an independent test of your model.
You could also fit both models (the prior one and the one you develop) to your test data, and compare the fit of both.
36,781 | What to do following poor fit statistics for a confirmatory factor analysis? | Instead of looking for statistical solutions that directly solve this problem, I would look for solutions that improve the diagnosis.
First, I'd compare the different samples used in the different studies.
Then, if you have the data, I'd look at the correlation patterns among the variables in the different samples. (You may be able to get these from other authors).
36,782 | Taking advantage of many pre-treatment measurements | This is not a complete answer, but just a few thoughts:
More pre-treatment measures should increase the reliability of your measurement of baseline differences. Increasing reliability of measuring baseline differences should increase your statistical power in detecting group differences (assuming a real effect exists) using the pre-post control design.
9000 pre-treatment measures is a lot. Such a design would usually imply that you are interested in the temporal dynamics of some phenomena. Nonetheless, if you are just using measurements as an indicator of baseline differences, then there would be a number of strategies for incorporating this into your model.
The simplest strategy would be to take the mean for each participant.
If there is trend in participant data, then an estimate of the individual's score just before the intervention may be more of interest.
Even more sophisticated would be to develop a model for each individual of what their score would be on the dependent variable following the intervention based on some projection using the pre-treatment measures. This might be more relevant if there was some form of seasonal or other systematic effect operating in different ways for different individuals.
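The first two strategies can be sketched as follows (function and names are mine; 'project' extrapolates a per-participant least-squares trend to the first post-treatment time point, a simple stand-in for the more sophisticated per-individual models mentioned above):

```python
def baseline_estimate(pre_scores, method="mean"):
    """Collapse one participant's pre-treatment series into a single
    baseline value.

    'mean'    -- simple average of all pre-treatment measurements
    'project' -- least-squares linear trend over time points 0..n-1,
                 extrapolated to the first post-treatment point t = n
    """
    n = len(pre_scores)
    if method == "mean":
        return sum(pre_scores) / n
    t_mean = (n - 1) / 2
    y_mean = sum(pre_scores) / n
    sxy = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(pre_scores))
    sxx = sum((t - t_mean) ** 2 for t in range(n))
    slope = sxy / sxx
    return y_mean + slope * (n - t_mean)

series = [10, 11, 13, 14, 16]                # a participant trending upward
print(baseline_estimate(series))             # mean baseline
print(baseline_estimate(series, "project"))  # projected score just after t = 4
```

For a trending participant the projected baseline differs noticeably from the plain mean, which is exactly when the second strategy is worth the extra effort.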
You may also want to read this earlier question on strategies for analysing such designs.
36,783 | Taking advantage of many pre-treatment measurements | Excuse my previous post. I now see that you are not referring to 9000 different covariates.
What I have written does not apply to your situation.
Sincerest apologies.
Paul
There is a lot of discussion about matching and dimensionality reduction on pre-treatment covariates that may be worthwhile examining - i.e. propensity weighting via logistic regression and establishing balance on the pre-treatment covariates vis a vis different matching approaches.
Please refer to the following: http://gking.harvard.edu/matchit
This approach is easily executed in R, but with the number of variables you would be looking to use, it would be very unlikely to work.
Cheers Paul
36,784 | How to choose df for comparisons between summary statistics (e.g. slope values)? | Here's how I have understood your question:
You have two groups of participants
Five observations per participant
Based on the five observations, you can extract a single summary statistic (e.g., if the five observations were performance over five time points, the summary statistic might be the slope of the regression line predicting performance from time)
General points:
If you want to test whether there are differences between groups on the summary statistic, you can do a standard t-test with standard degrees of freedom.
Having more observations per individual will increase the reliability with which you measure the summary statistic.
Greater reliability of measurement means larger expected group differences and thus greater statistical power (see reliability attenuation).
Very similar points could be made if instead of having two groups you had a numeric variable measured once on each participant, such as age, and you wanted to correlate this with your summary statistic.
There are many ways to measure something on a set of participants. You just happened to have applied an algorithm (e.g., a linear regression leading to a slope) to a set of observations to derive your measure.
36,785 | How to choose df for comparisons between summary statistics (e.g. slope values)? | This is a very simple multi-level (a.k.a. hierarchical) model. Douglas Bates is currently working on a book on the subject (draft available here: http://lme4.r-forge.r-project.org/book/). While there are many books on this subject, Doug's has the added benefit of being designed around the 'lme4' R package, a very handy package designed to fit such models. I think it is best for you to go and read the first chapter of that book as well as practice the examples provided there inside R. You can always come back with more specific questions.
36,786 | How to choose df for comparisons between summary statistics (e.g. slope values)? | Concerning your more specific question (i.e. how many degrees of freedom): the question is how many replicates do you have. Look at the early pages of chapter 19 of the R book for examples and guidelines for such accounting.
We could do the accounting here, but I don't understand the design of your experiment (probably due to a difference in vocabulary; it could be easier if you explained it in formal (i.e. math) script, with care to define the indices).
You might also want to check the following paper:
Hurlbert, S.H. (1984) Pseudoreplication and the design of ecological field experiments. Ecological Monographs, 54, 187–211.
36,787 | Intraclass correlation and aggregation | I think (1) is not a statistical question but a subject-area one. E.g., in the described example it would be up to those who study group psychology to determine appropriate language for the strength of ICCs. This is analogous to a Pearson correlation -- what constitutes 'strong' differs depending on whether one is working in, for example, sociology or physics.
(2) is to an extent also subject-area specific -- it depends on what researchers are aiming to measure and describe. But from a statistical point of view ICC is a reasonable metric for within-team relatedness. However I agree with Mike that when you say you'd like to
"describe the extent to which
the measure of team effectiveness is a
property of the team member's
idiosyncratic belief or a property of
a shared belief about the team"
then it is probably more appropriate to use variance components in their raw form than to convert them into an ICC.
To clarify, think of the ICC as calculated within a mixed model. For a single-level mixed model with random group-level intercepts $b_i \sim N(0, \sigma^2_b)$ and within-group errors $\epsilon_{ij} \stackrel{\mathrm{iid}}{\sim} N(0, \sigma^2)$, $\sigma^2_b$ describes the amount of variation between teams and $\sigma^2$ describes variation within teams. Then, for a single team, we get a response covariance matrix of $\sigma^2 \mathbf{I} + \sigma^2_b \mathbf{1}\mathbf{1}'$ which when converted to a correlation matrix is $\frac{\sigma^2}{\sigma^2 + \sigma^2_b} \mathbf{I} + \frac{\sigma^2_b}{\sigma^2 + \sigma^2_b} \mathbf{1}\mathbf{1}'$. So, $\frac{\sigma^2_b}{\sigma^2 + \sigma^2_b} = \mathrm{ICC}$ describes the level of correlation between effectiveness responses within a team, but it sounds as though you may be more interested in $\sigma^2$ and $\sigma^2_b$, or perhaps $\frac{\sigma^2}{\sigma^2_b}$.
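As a quick numerical illustration of $\mathrm{ICC} = \sigma^2_b / (\sigma^2 + \sigma^2_b)$, one can simulate teams from exactly this mixed model and recover the ICC from one-way ANOVA mean squares. This is a stdlib-only Python sketch; the variance values and team counts are arbitrary choices for illustration.

```python
import random
import statistics as stats

random.seed(1)
var_b, var_e = 4.0, 1.0              # sigma^2_b (between teams), sigma^2 (within teams)
k, n_teams = 8, 500                  # members per team, number of teams
true_icc = var_b / (var_b + var_e)   # 0.8 with these values

teams = []
for _ in range(n_teams):
    b = random.gauss(0, var_b ** 0.5)                          # team random intercept
    teams.append([b + random.gauss(0, var_e ** 0.5) for _ in range(k)])

means = [stats.mean(team) for team in teams]
grand = stats.mean(means)            # balanced design: mean of team means = grand mean

# one-way ANOVA mean squares
ms_between = k * sum((m - grand) ** 2 for m in means) / (n_teams - 1)
ms_within = sum((y - m) ** 2
                for team, m in zip(teams, means)
                for y in team) / (n_teams * (k - 1))

# the standard ANOVA estimator of ICC(1); should land near true_icc
icc1 = (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

With many teams the estimate sits close to the population value, which connects the "correlation within a team" reading of the ICC to the variance components $\sigma^2_b$ and $\sigma^2$ directly.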
36,788 | Intraclass correlation and aggregation | 1) With correlations, you can never really give sensible cut-offs, but the general rules for the ordinary correlation apply, I'd say.
2) Regarding the appropriateness of the ICC: depending on the data, the ICC is equivalent to an F-test (see e.g. Commenges & Jacqmin, 1994 and Kistner & Muller, 2004). So in essence, the mixed model framework can tell you at least as much about your hypothesis, and allows for simultaneously testing more hypotheses than the ICC.
Cronbach's $\alpha$ is also directly related to the ICC, and is another measure that is (was?) often reported, albeit in the context of agreement between items within a group. This approach comes from psychological questionnaires, where a cut-off of 0.7 is rather often used to determine whether the questions really group into the studied factors.
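For reference, Cronbach's $\alpha$ can be computed directly from the item variances and the variance of the total score: $\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_j \sigma^2_j}{\sigma^2_{\text{total}}}\right)$. A stdlib-only Python sketch with simulated parallel items (all numbers here are illustrative, not from the question):

```python
import random
import statistics as stats

random.seed(2)
k, n = 5, 2000               # items per questionnaire, respondents

# parallel items: a shared trait plus independent item-specific noise,
# so each pair of items has correlation 0.5 in the population
rows = []
for _ in range(n):
    trait = random.gauss(0, 1)
    rows.append([trait + random.gauss(0, 1) for _ in range(k)])

item_vars = [stats.variance([r[j] for r in rows]) for j in range(k)]
total_var = stats.variance([sum(r) for r in rows])

# Cronbach's alpha; for these items the population value is
# k*rho / (1 + (k-1)*rho) = 5*0.5 / (1 + 4*0.5), about 0.83
alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Here the simulated scale comfortably clears the conventional 0.7 cut-off mentioned above; with weaker inter-item correlation it would not.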
36,789 | Intraclass correlation and aggregation | Paul Bliese has an article discussing the intraclass correlation in teams research. He writes that
In [his extensive] experience with U.S. Army [teams] data ...he never encountered ICC(1) values greater than .30 [, and that he] typically [sees] values between .05 and .20.
He goes on to suggest that he would be
surprised to find ICC(1) values greater than .30 in most applied field research.
I have read articles that cite this article, arguably inappropriately, suggesting an ICC(1) value greater than .05 is needed to justify aggregation.
References
Bliese, P. D. (2000). Within-group agreement, non-independence, and reliability: Implications for data aggregation and analysis. PDF
36,790 | On the high dimensional bootstrap | TL;DR The article that you refer to makes things look worse than they actually are. Their bootstrapping procedure is not a good way to apply bootstrapping. In the case of OLS there shouldn't be big problems with high dimensionality if the sample size is large. If you cannot get correct results with OLS, where a correct confidence interval can be easily computed analytically, then something must be wrong with the implementation of the bootstrapping method.
It is good though to be reminded that the residuals are not the same as the errors and that we can use simulations with OLS to test (potentially wrong) implementations of bootstrapping.
Simple reproduction of the article results
The article that you refer to is performing simulations of errors by bootstrapping/resampling of the residuals. Below is a simple example that reproduces this.
The model is a linear regression with $n=500$ samples (or 250 pairs) and $p=125$ parameters. The distributions that are plotted here are just for the first parameter estimate $\hat{\beta}_1$.
Discrepancy in estimated sample variance
The third image, resampling the true errors, gives a correct indication of the sample distribution of the coefficient.
The first and second images, resampling all residuals, or resampling the pairs, have distributions with a different variance. They lead to errors in the estimates of standard errors and confidence intervals.
The reason for the discrepancy is that bootstrapping only works when the bootstrapped samples are a good representation of the true distribution. This is not the case when $p/n$ is large.
Resampling residuals: the bootstrap samples are created by simulating errors by sampling from the residuals; however, the variance of the residuals is lower than the variance of the errors $$\text{Var}(r_i) \approx \left(1-\frac{p}{n}\right) \text{Var}(\epsilon)$$
Pairwise resampling: in the case of pairwise resampling the distribution is effectively a scaled binomial distribution. The variance will be
$$\text{Var}(r_{i,paired}) \approx \frac{1}{2} \text{Var}(\epsilon)$$
The factor $0.5$ stems from the Bernoulli selection variable, which has variance $0.25$, scaled by the difference of the two paired residuals, which has variance $2\text{Var}(\epsilon)$.
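The $(1-p/n)$ shrinkage of the residual variance can be checked numerically even in the simplest setting. The answer's own R code below makes the point with $p/n = 0.25$; here is an independent stdlib-only Python sketch using simple regression with $p = 2$ parameters and $n = 10$ observations, so the mean squared residual should average about $0.8\,\text{Var}(\epsilon)$ (the parameter choices are illustrative):

```python
import random

random.seed(3)
n, reps = 10, 4000           # observations per fit, number of simulated fits
p = 2                        # parameters: intercept and slope
x = list(range(n))
x_bar = sum(x) / n
s_xx = sum((xi - x_bar) ** 2 for xi in x)

total = 0.0
for _ in range(reps):
    eps = [random.gauss(0, 1) for _ in range(n)]   # true errors, Var = 1
    y = eps                                        # true coefficients are zero
    y_bar = sum(y) / n
    slope = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / s_xx
    intercept = y_bar - slope * x_bar
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    total += sum(r * r for r in resid) / n         # mean squared residual

avg_resid_var = total / reps   # should be close to (1 - p/n) = 0.8, not 1
```

The same mechanism explains why naive residual resampling understates the error variance more and more severely as $p/n$ grows.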
Discussion
You should only use bootstrapping when the sampling is done from a distribution that represents the population distribution. This is not the case with paired resampling, which is sampling a distribution with half the variance, and with the residual resampling, which is sampling a distribution with smaller variance if $p/n$ is large.
Bootstrapping is often performed when a distribution is difficult to compute. This is the case when 1) the assumptions about the error distribution are false, or the error distribution is unknown, or 2) the propagation of errors is difficult to compute.
For ordinary linear regression, the second case is not an issue. The statistic is a linear sum of the data and its sampling distribution will often approximate a normal distribution. With different cost functions the behaviour might not be too far off. The problem is just to estimate the variance, and the residuals are often a good indication for this. But one has to apply the right corrections.
The problem is more difficult in the situation where the error distribution has large tails and the variance is not easily estimated with a small sample. In this case the typical remedy is to simply gather more data. Potentially one could do an advanced semi-parametric bootstrapping by combining the residuals with a normal distribution that relates to the residuals being the errors with the estimate subtracted (the estimate being some correlated normal distribution).
Plot of reproduction
set.seed(1)
n = 500 # data samples
p = 125 # parameters
m = 1000 # times resampling
### create paired data
X = matrix(rnorm(n*p/2),ncol =p)
X = rbind(X,X)
Y = rnorm(n)
solve(t(X)%*%X)[1,1] ### this is the theoretical variance
### compute main model
mod = lm(Y~X+0)
### variables used for resampling
Y_m = predict(mod)
res = mod$residuals
err = Y
### perform resampling of residuals
b_residuals = sapply(1:m, FUN = function(i) {
Y_s = Y_m + sample(res,n)
lm(Y_s~X+0)$coefficients[1]
})
### perform resampling of errors
b_errors = sapply(1:m, FUN = function(i) {
Y_s = Y_m + sample(err,n)
lm(Y_s~X+0)$coefficients[1]
})
### perform paired resampling
b_paired = sapply(1:m, FUN = function(i) {
selection = rep(1:(n/2),2)+rbinom(n,1,0.5)*n/2
Y_s = Y_m + res[selection]
lm(Y_s~X+0)$coefficients[1]
})
### plot histograms
layout(matrix(1:3,3))
hist(b_residuals, breaks = seq(-0.5,0.5,0.02),
freq = 0, ylim = c(0,10), main = "resampling of residuals", xlab = expression(beta[1]))
lines(mod$coefficients[1]*c(1,1),c(0,10), lty = 2, col = 2)
hist(b_paired, breaks = seq(-0.5,0.5,0.02),
freq = 0, ylim = c(0,10), main = "resampling of residual pairs", xlab = expression(beta[1]))
lines(mod$coefficients[1]*c(1,1),c(0,10), lty = 2, col = 2)
hist(b_errors, breaks = seq(-0.5,0.5,0.02),
freq = 0, ylim = c(0,10), main = "resampling of true errors", xlab = expression(beta[1]))
lines(mod$coefficients[1]*c(1,1),c(0,10), lty = 2, col = 2)
var(b_residuals)/var(b_errors)
var(b_paired)/var(b_errors)
36,791 | What is the roadmap to self-taught probability and statistics for artificial intelligence? | If you were an academic, one must assume you already have a good reference for multivariable calculus, linear algebra, and differential equations – these are not optional. I personally heard from Witten and Tibshirani that their texts have the greatest value in working out the problems in excruciating detail, including intensive matrix algebra. So, bone up on these skills if you haven't already.
A mathematical pedagogy is fundamentally different from computer science. Whereas CS advocates a top-down approach, mathematics is about finding generalizations. That's why (on this site and elsewhere) you have many self-proclaimed "ML experts" who have fit enough algorithms on Kaggle to burn out a network of NVidia graphics cards, but who can't write down an estimating equation to save their life.
If you were a diligent student, you would hope to cover all this over the course of 4-6 years of dedicated study.
If you were a graduate statistics student, you would take a theory course from, say, Casella & Berger (research other posts on this one, there may be better texts), linear modeling, and then advanced theory up to minimax estimation, empirical processes, etc. Texts might include Ferguson's A Course in Large Sample Theory, or Lehmann and Casella's Theory of Point Estimation. At that point you can read and understand foundational work. These are necessary to "prove" that many algorithmic solutions are well motivated, such as the bootstrap, LARS, etc. Referring to "Bayesian" alone is a forgivable newbie mistake, but to participate meaningfully on this site, you need to be more precise. Peter Hoff's "A First Course in Bayesian Statistical Methods" should cover a broad number of areas. Harrell's "Regression Modeling Strategies" is an applied text with some modern solutions that provide a lot of area for research.
Take a look at this page from Arcones and Gine regarding bootstrapping. A procedure as simple as repeatedly resampling rows with replacement from a dataset requires knowledge of a practically completely new area of statistics, empirical process theory (see the texts from van der Vaart and Wellner for a reference on this... not for the faint of heart!).
If you want to understand the mettle that these researchers bring to the theoretical forefront, you just need to look up any related article in premier research journals, such as Biometrika, JRSS, JASA, etc. It is a good exercise at times to find a journal article you really want to understand that's way beyond your ability and try to replicate the results, looking up cited references as needed. With Sci-Hub this is within almost anyone's reach.
36,792 | Does convergence in distribution with correlation tending to 1 imply convergence in probability? | Convergence in probability holds under additional (weak) conditions
It is possible to establish convergence in probability here if you can first establish the moment convergence $\mathbb{E}(X_n) \rightarrow \mathbb{E}(Z)$ and $\mathbb{V}(X_n) \rightarrow \mathbb{V}(Z)$. Additional conditions for the moment convergence require that the sequences of moments $\mathbb{E}(X_n)$ and $\mathbb{V}(X_n)$ are bounded for sufficiently large $n$ (see e.g., here). Here it is worth noting that the antecedent limit condition $\mathbb{Corr}(X_n, Z) \rightarrow 1$ already implies that the correlation exists, so $\mathbb{E}(Z)$ and $\mathbb{V}(Z)$ are finite and the other moments are finite for sufficiently large $n$; so all we need in order to get the required condition is to assume that the moments are also bounded for sufficiently large $n$.
Now, assuming you can first establish the required convergence in moments, you then have $\mathbb{E}(X_n - Z) \rightarrow 0$ as an initial property. Moreover, since $\mathbb{Corr}(X_n, Z) \rightarrow 1$ and $\mathbb{V}(X_n) \rightarrow \mathbb{V}(Z)$ you also have the property:
$$\begin{align}
\mathbb{V}(X_n - Z)
&= \mathbb{V}(X_n) - 2 \cdot \mathbb{Cov}(X_n, Z) + \mathbb{V}(Z) \\[12pt]
&= \mathbb{V}(X_n) - 2 \cdot \mathbb{Corr}(X_n, Z) \sqrt{\mathbb{V}(X_n) \mathbb{V}(Z)} + \mathbb{V}(Z) \\[6pt]
&\rightarrow \mathbb{V}(Z) - 2 \times 1 \times \sqrt{\mathbb{V}(Z) \cdot \mathbb{V}(Z)} + \mathbb{V}(Z) \\[10pt]
&= 2 \mathbb{V}(Z) - 2 \mathbb{V}(Z) \\[12pt]
&= 0. \\[6pt]
\end{align}$$
These two properties establish convergence in mean-square, which then implies convergence in probability (using Markov's inequality).
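The final step can be written out explicitly: applying Markov's inequality to the squared deviation (i.e., Chebyshev's inequality) gives, for any $\varepsilon > 0$,

```latex
\mathbb{P}\big(|X_n - Z| \geq \varepsilon\big)
  \;\leq\; \frac{\mathbb{E}\big[(X_n - Z)^2\big]}{\varepsilon^2}
  \;=\; \frac{\mathbb{V}(X_n - Z) + \mathbb{E}(X_n - Z)^2}{\varepsilon^2}
  \;\longrightarrow\; 0,
```

so $X_n \stackrel{p}{\to} Z$.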
It is possible to establish convergence in probability here if you can first establish the moment convergence $\mathbb{E}(X_n) \righ | Does convergence in distribution with correlation tending to 1 implies convergence in probability?
Convergence in probability holds under additional (weak) conditions
It is possible to establish convergence in probability here if you can first establish the moment convergence $\mathbb{E}(X_n) \rightarrow \mathbb{E}(Z)$ and $\mathbb{V}(X_n) \rightarrow \mathbb{V}(Z)$. Additional conditions for the moment convergence require that the sequence of moments $\mathbb{E}(X_n)$ and $\mathbb{V}(X_n)$ are bounded for sufficiently large $n$ (see e.g., here). Here it is worth noting that the antecedent limit condition $\mathbb{Corr}(X_n, Z) \rightarrow 1$ already implies that the correlation exists, so $\mathbb{E}(Z)$ and $\mathbb{V}(Z)$ are finite and the other moments are finite for sufficiently large $n$ so all we need to go further to get the required condition is to assume that they are also bounded for sufficiently large $n$.
Now, assuming you can first establish the required convergence in moments, you then have $\mathbb{E}(X_n - Z) \rightarrow 0$ as an initial property. Moreover, since $\mathbb{Corr}(X_n, Z) \rightarrow 1$ and $\mathbb{V}(X_n) \rightarrow \mathbb{V}(Z)$ you also have the property:
$$\begin{align}
\mathbb{V}(X_n - Z)
&= \mathbb{V}(X_n) - 2 \cdot \mathbb{Cov}(X_n, Z) + \mathbb{V}(Z) \\[12pt]
&= \mathbb{V}(X_n) - 2 \cdot \mathbb{Corr}(X_n, Z) \sqrt{\mathbb{V}(X_n) \mathbb{V}(Z)} + \mathbb{V}(Z) \\[6pt]
&\rightarrow \mathbb{V}(Z) - 2 \times 1 \times \sqrt{\mathbb{V}(Z) \cdot \mathbb{V}(Z)} + \mathbb{V}(Z) \\[10pt]
&= 2 \mathbb{V}(Z) - 2 \mathbb{V}(Z) \\[12pt]
&= 0. \\[6pt]
\end{align}$$
These two properties establish convergence in mean-square, which then implies convergence in probability (using Markov's inequality).
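As a quick numerical check (an added sketch, not part of the original derivation): take the explicit construction $X_n = \rho_n Z + \sqrt{1-\rho_n^2}\,\varepsilon$ with $\varepsilon$ independent of $Z$, for which $\mathbb{Corr}(X_n, Z) = \rho_n$ and $\mathbb{V}(X_n) = \mathbb{V}(Z) = 1$, so the argument above predicts $\mathbb{E}[(X_n - Z)^2] = 2 - 2\rho_n \to 0$ as $\rho_n \to 1$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 500_000
Z = rng.standard_normal(N)
eps = rng.standard_normal(N)          # noise independent of Z

ms = []                               # Monte Carlo estimates of E[(X_n - Z)^2]
for rho in (0.9, 0.99, 0.999):
    Xn = rho * Z + np.sqrt(1 - rho**2) * eps   # Corr(X_n, Z) = rho, Var(X_n) = 1
    ms.append(np.mean((Xn - Z) ** 2))
    print(rho, round(ms[-1], 4))      # theory: 2 - 2*rho
```

The estimates track $2 - 2\rho_n$ and shrink toward zero, which is exactly the mean-square convergence used in the argument.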
36,793 | Does convergence in distribution with correlation tending to 1 implies convergence in probability? | Two notes on the convergence in moments assumed for @Ben's answer
It's not easy.
Suppose we had instead that $X_n\stackrel{d}{\to} Z$, and $Z_n\stackrel{p}{\to}Z$ and $\mathbb{Corr}[X_n, Z_n]\to 1$. Convergence in probability of $X_n$ to $Z$ need not hold in this slightly modified problem.
Take
$U\sim U[0,1]$,
$V\sim N(0,1)$
$X_n\sim N(0,1)$ if $U>1/n$ and $X_n=2^n$ if $U<1/n$
$Z_n=V$ if $U>1/n$ and $Z_n=2^n$ if $U<1/n$
$Z=V$
with all the $N(0,1)$s independent. Then $\mathbb{Corr}[X_n, Z_n]$ exists for every $n$ and converges to 1, but $X_n$ does not converge in probability to $Z$ (it doesn't converge in probability at all).
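This construction is easy to check numerically (an added sketch; the values of $n$ and the sample size are arbitrary choices): the sample correlation of $(X_n, Z_n)$ is driven to 1 by the rare shared outlier at $2^n$, while $\mathbb{P}(|X_n - Z| > 1)$ stays bounded away from zero (recall $Z = V$):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
U = rng.uniform(size=N)
V = rng.standard_normal(N)            # Z = V throughout
W = rng.standard_normal(N)            # the fresh N(0,1) used for X_n when U > 1/n

corrs, misses = [], []
for n in (8, 12, 16):
    outlier = U < 1.0 / n
    Xn = np.where(outlier, 2.0**n, W)
    Zn = np.where(outlier, 2.0**n, V)
    corrs.append(np.corrcoef(Xn, Zn)[0, 1])
    misses.append(np.mean(np.abs(Xn - V) > 1.0))   # estimates P(|X_n - Z| > 1)
    print(n, round(corrs[-1], 5), round(misses[-1], 3))
```

The correlation estimates sit essentially at 1 while the miss probability does not shrink with $n$, so $X_n$ is not converging in probability to $Z$.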
By using the same $N(0,1)$ for all $n$ in the definition of $X_n$, you could also arrange for $X_n$ to converge in probability to a $N(0,1)$ that was independent of $Z$.
The result is true.
You can't do anything like the construction in part 1, because $Z$ doesn't vary with $n$. Heuristically, you need increasingly rare and extreme outliers in the $X_n$ and they can't stay correlated with fixed outliers in $Z$.
Proof
Rather than working with the variance, we work with a truncated variance. Given a finite, positive $M$, write $X^M_n$ for $X_n\mathbf{1}\{|X_n|<M\}$. We know the variance of $Z$ is finite, and there's only one of it so it's also uniformly bounded and we don't need to truncate it.
Now for any fixed $M$, the truncated variance is continuous with respect to convergence in distribution, so
$$\mathbb{V}[X^M_n-Z]\to \mathbb{V}[Z^M-Z]$$
and given any $\epsilon>0$ we can choose $M$ so the limit is less than $\epsilon$, by finiteness of $\mathbb{V}[Z]$.
So, given $\epsilon$, we can find $N$ and $M$ such that for $n>N$
$$\mathbb{V}[X^M_n-Z]<2\epsilon$$
and (since $X_n$ converges in distribution)
$$\mathbb{P}[X_n\neq X_n^M]<\epsilon$$
Now for any $\eta$
$$\mathbb{P}[|X_n-Z|>\eta]\leq \mathbb{P}[|X_n-X_n^M|>\eta]+ \mathbb{P}[|Z-X_n^M|>\eta]$$
The first term is bounded by $\epsilon$ and the second (via Chebyshev's inequality) by something like $2\epsilon/\eta^2$. So we can choose $\epsilon$ to make it small and we are (finally) done.
Check
Why wouldn't this proof work for the modified problem where the result is false? The very first line
$$\mathbb{V}[X^M_n-Z]\to \mathbb{V}[Z^M-Z]$$
fails, since the correlation condition is on $Z_n$ rather than $Z$. It's important to the proof that $Z$ doesn't need truncation.
36,794 | Does convergence in distribution with correlation tending to 1 implies convergence in probability? | Isn't this reminiscent of the CLT? If the distribution of $X_{n}$ converges to that of $Z$, this means that the probability converges to that of $Z$ as well. After all, the probabilities are obtained from distributions. The Pearson coefficient is:
\begin{equation}
\rho_{X,Z}=\frac{cov(X,Z)}{\sigma_{X} \sigma_{Z}},
\end{equation}
but in the limit $n\rightarrow\infty$: $X\rightarrow Z$, thus:
\begin{equation}
\rho_{X,Z}=\frac{cov(X,Z)}{\sigma_{X} \sigma_{Z}}=\frac{E[XZ]-E[X]E[Z]}{\sigma^{2}}=\frac{E[Z^{2}]-E[Z]^{2}}{\sigma^{2}}=\frac{\sigma^{2}}{\sigma^{2}}=1,
\end{equation}
where I have defined $\sigma_{X}=\sigma_{Z}=\sigma$ in the limit $n\rightarrow\infty$.
36,795 | How competitive is stepwise regression when it comes to pure prediction? | Stepwise regression is not generally bad at prediction, in the sense that it is not generally worse than, say, LASSO or best subset selection. Which is to say it may be quite good! Recent evidence on that can be found in Hastie et al. (2020). (I regard the authors as pretty much the ultimate experts in the field.) Noting that LASSO does not universally dominate ridge nor the other way around (see e.g. Tibshirani (1996)), I conjecture that stepwise regression may do better or worse than ridge, too.
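As a small toy illustration of the point (an added simulation of mine, not from either paper — the sample sizes, coefficients, and seed are arbitrary choices): on sparse synthetic data, plain forward stepwise selection is competitive with the full least-squares fit for prediction, simply because it drops the noise variables:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 20, 3                        # samples, candidate predictors, true signals
beta = np.zeros(p)
beta[:k] = [2.0, -1.5, 1.0]                 # sparse ground truth
X = rng.standard_normal((n, p))
y = X @ beta + rng.standard_normal(n)
X_test = rng.standard_normal((5_000, p))
y_test = X_test @ beta + rng.standard_normal(5_000)

def ols_predict(X_tr, y_tr, X_new):
    b, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return X_new @ b

# Forward stepwise: greedily add the predictor that most reduces training RSS.
selected, remaining = [], list(range(p))
for _ in range(k):
    rss = {j: np.sum((y - ols_predict(X[:, selected + [j]], y, X[:, selected + [j]])) ** 2)
           for j in remaining}
    best = min(rss, key=rss.get)
    selected.append(best)
    remaining.remove(best)

mse_step = np.mean((y_test - ols_predict(X[:, selected], y, X_test[:, selected])) ** 2)
mse_full = np.mean((y_test - ols_predict(X, y, X_test)) ** 2)
print(sorted(selected), round(mse_step, 3), round(mse_full, 3))
```

Here stepwise recovers the three true predictors and its test MSE is close to the noise floor; of course this is a best-case sparse setting, not a general claim.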
References:
Hastie, T., Tibshirani, R., & Tibshirani, R. (2020). Best subset, forward stepwise or lasso? Analysis and recommendations based on extensive comparisons. Statistical Science, 35(4), 579-592.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B (Methodological), 58(1), 267-288.
36,796 | Does $E\frac{1}{\|x\|^{4}} \rightarrow \frac{1}{E\|x\|^4}$ in high dimensions? | Reformulation in terms of linear combination of $\chi^2(1)$ variables
We can reformulate the problem.
Let's rewrite $$\Vert \mathbf{x} \Vert^4 =\left(\sum_{k=1}^n x_k^2\right)^2 = Y_n^2$$
so that we can focus on the variable
$$Y_n = \sum_{k=1}^n x_k^2$$
and the problem statement in terms of $Y_n$ becomes
$$E[1/Y_n^2] \to 1/E[Y_n^2]$$
We can make another reformulation: considering the eigenvalues $\lambda_k$ of the matrix $\Sigma$, we can express $Y_n$ as a linear combination of $n$ i.i.d. chi-squared variables.
$$Y_n \sim \sum_{k=1}^n \lambda_k Z_k \qquad \text{where $\forall k:Z_k \sim \chi^2(1)$}$$
where we have the conditions that
$\sum_{k=1}^n \lambda_k = 1$, which relates to the condition that $Tr(\Sigma) = 1$.
$\max(\lambda_k) \to 0$, which relates to the spectral norm approaching zero.
The mechanism behind the convergence
The expectation and variance of $Y_n$ is
$$E[Y_n] = \sum_{k=1}^n \lambda_k = 1$$
and
$$\text{Var}[Y_n] = \sum_{k=1}^n 2 \lambda_k^2 \to 0$$
Intuitively: The variable $Y_n$ approaches a constant value $1$ and that is how $E[1/Y_n^2]$ will approach $1/E[Y_n^2]$.
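A quick Monte Carlo check of this in the simplest case (an added sketch): with equal eigenvalues $\lambda_k = 1/n$ we have $Y_n \sim \chi^2(n)/n$, for which $E[1/Y_n^2] = n^2/((n-2)(n-4))$ exactly (from the inverse-chi-squared moments, valid for $n>4$), and both the estimate and the exact value head to $1$ as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
ests, exacts = [], []
# Equal eigenvalues lambda_k = 1/n, i.e. Y_n ~ chi2(n)/n; estimate E[1/Y_n^2].
for n in (20, 200, 2000):
    Y = rng.chisquare(n, size=100_000) / n
    ests.append(np.mean(1.0 / Y**2))
    exacts.append(n**2 / ((n - 2) * (n - 4)))   # exact value in this special case
    print(n, round(ests[-1], 3), round(exacts[-1], 3))
```

The Monte Carlo values agree with the closed form and decrease toward $1$, matching the intuition that $Y_n$ concentrates at its mean.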
I am not sure how to make this formal. I am thinking about something like the continuous mapping theorem. If $Y_n \to 1$ then $f(Y_n) \to f(1)$. But I am not sure whether the decreasing variance is sufficient to state that $Y_n \to 1$ and what sort of convergence is exactly needed or allowed to make the statements.
A problem with the convergence
In intuitive terms we see that the variance shrinks to zero, and that is what makes the convergence happen, at least seemingly in simulations. A point that worries me is that an inverse function like $E[1/Y_n^2]$ can involve division by zero and result in an infinite or undefined expectation. For instance, if we have a normally distributed variable $W_n \sim \mathcal{N}(1,1/n)$ then we do not get convergence $E[1/W_n] \to 1/E[W_n]$, because the expectation of $1/W_n$ is undefined.
So a problem with the above intuitive reasoning is that $E[1/Y_n^2]$ may be undefined when the density of $Y_n$ at zero is non-zero. For instance, the inverse of the square of a chi-squared distribution has no finite expectation value when $\nu \leq 4$ (see the variance of an inverse chi-squared distribution).
What we need to prove is that the $\lambda_k$ cannot behave in such a way while $\max(\lambda_k)$ approaches zero.
I imagine for instance a dominant term that approaches zero very slowly while the remaining terms approach zero very quickly. E.g. some slowly decreasing function of $n$ such that
$$\lambda_k = \begin{cases} f(n) &\quad \text{if} \quad k=n \\
\frac{1-f(n)}{n-1} &\quad \text{if} \quad k\neq n \end{cases}$$
Then $Y_n$ is a sum of two scaled chi-squared variables, one with 1 degree of freedom and another with $n-1$ degrees of freedom.
$$Y_n \sim f(n) \chi^2(1) + \frac{1-f(n)}{n-1} \chi^2(n-1)$$
I don't believe that this $Y_n$ has a non-zero density at zero. I also don't believe that any other similar approach can result in a non-zero density at zero for $Y_n$.
We have
$$Y_n \sim \sum_{k=1}^n \lambda_k Z_k \geq \min_j(\lambda_j) \sum_{k=1}^n Z_k \sim \Gamma\!\left(\alpha=\frac{n}{2},\ \theta = 2 \min_j(\lambda_j)\right)$$
Because $Y_n$ is going to be made of at least 4 components (otherwise $\max(\lambda_k)$ can't approach zero), you get that the variable $Y_n$ is at least as large as a scaled chi-squared variable with more than 4 degrees of freedom, and the density at zero should be zero.
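To probe the non-limit behaviour, here is a Monte Carlo sketch (an added example; the decaying eigenvalue profile $\lambda_k \propto 1/\sqrt{k}$ is an arbitrary choice) showing that for a fixed $n$ with unequal eigenvalues, $E[1/Y_n^2]$ is finite but sits above $1/E[Y_n^2]$, as convexity (Jensen's inequality) requires:

```python
import numpy as np

rng = np.random.default_rng(2)
N, n = 100_000, 200
lam = 1.0 / np.sqrt(np.arange(1, n + 1))  # decaying eigenvalue profile (arbitrary)
lam /= lam.sum()                          # enforce Tr(Sigma) = 1

Y = np.zeros(N)
for k in range(n):                        # Y_n = sum_k lambda_k Z_k, Z_k ~ chi2(1)
    Y += lam[k] * rng.chisquare(1, size=N)

lhs = np.mean(1.0 / Y**2)                 # estimates E[1/Y_n^2]
rhs = 1.0 / np.mean(Y**2)                 # estimates 1/E[Y_n^2]
print(round(lhs, 4), round(rhs, 4))
```

The two quantities are close but not equal at finite $n$; the gap is governed by $\text{Var}[Y_n] = 2\sum_k \lambda_k^2$.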
36,797 | Is there a Good Illustrative Example where the Hinge Loss (SVM) Gives a Higher Accuracy than the Logistic Loss | The example attributed to Olivier Bousquet doesn't work as clearly as depicted in the blog post mentioned in an answer to a related question, but I have been able to make it work to some extent in a more realistic setting, so I'll post it here in case it stimulates further (hopefully simpler or more informative) examples.
The (of course) adversarial learning task is shown below. The probability of membership of the positive class is given by
$$p(\mathcal{C}_+ \mid x) = \frac{0.5}{1 + \exp(-100x)} + 0.25 + 0.24\sin(20x)$$
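For concreteness, here is a small sketch of sampling this task (added code, not the author's Mathematica/MATLAB implementation): uniform $x$ on $[-1, 1]$, Bernoulli class labels drawn from the probability above, and the accuracy of the Bayes rule that simply thresholds at $x = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_pos(x):
    # P(C+ | x): a sharp step plus undulations irrelevant to the decision boundary
    return 0.5 / (1.0 + np.exp(-100 * x)) + 0.25 + 0.24 * np.sin(20 * x)

x = rng.uniform(-1, 1, size=100_000)
y = (rng.uniform(size=x.size) < p_pos(x)).astype(int)   # Bernoulli labels
bayes_acc = np.mean((x > 0) == (y == 1))                # accuracy of the sign(x) rule
print(round(bayes_acc, 3))
```

Note the irreducible error: even the optimal threshold at $x = 0$ is far from 100% accurate because the class probabilities are never close to 0 or 1.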
Note there are features of the true probability of class membership that do not affect the decision boundary, so any model may be distracted by modelling those irrelevant undulations at the expense of accurately determining the optimal decision boundary. To demonstrate that, Bousquet's example builds a series of logistic regression-style models, of increasing complexity, based on Legendre polynomials (for numerical considerations). Here are the first seven basis functions:
The blog example is implemented in Mathematica, which is a language I don't know, but I have been able to replicate their results in MATLAB tolerably well. What I think they have done is to fit these Legendre polynomial models firstly using the cross-entropy metric, and then using the hinge loss, but rather strangely they have fitted it directly to the true (sampled) probability of class membership, i.e. the response values lie between 0 and 1. Using the cross-entropy, I get this result:
Which is broadly the same as the Mathematica implementation. Note that in the attempt to model the undulations in the probability of class membership, the model has overshot on occasion, and so the accuracy is lower than we would get by simply placing a threshold at $x = 0$.
It wasn't completely clear how to implement the model with the hinge loss, as the logistic function clips the output to lie in the range of 0 to 1, so instead I used the hinge loss on the weighted sum of the Legendre basis functions and then afterwards applied the logistic function. This is the result:
All the models now achieve the optimal accuracy, although the estimates of the posterior probability of class membership are clearly inferior (if not actually plain wrong!).
HOWEVER this is not what we actually do when we have a classification task. If we knew the optimal posterior probability of class membership to determine the targets for the training data, we probably wouldn't need to build a classifier in the first place! So I then modified the code so that instead of the response values being the sampled true probability, I generated random x values (uniform distribution from -1 to +1) and then generated binary responses according to the probability of class membership.
This is the result for the cross-entropy error metric, which is pretty much the same as before.
Here is the result for the hinge loss, which is very different.
So why the difference? Well, in the blog version, if we set the weight of the "linear" term to a very high value, then the weighted sum will be less than -1 for all of the data to the left of $x = 0$ and greater than +1 for all of the data on the right. In which case, the hinge loss will be zero, and we will get a classifier with minimal error regardless of model complexity. The hinge loss cannot be negative. However, if we sample the labels from a conditional Bernoulli distribution, we will have data on both sides of $x=0$ that are both positively and negatively labelled, and if they are the wrong side of $x=0$ they will have a non-zero hinge loss, and hence we will start penalising models with a large linear term increasingly harshly. It does have some excess error caused by trying to model the right-most undulation, but it does seem to be more robust in terms of accuracy than the cross-entropy loss. So trying to classify the data directly, rather than estimating a probability and then thresholding, just isn't as clear-cut as the example in the blog suggests.
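The scaling argument can be seen in a tiny added calculation (the toy points below are arbitrary): with labels that all agree with $\mathrm{sign}(x)$, inflating the linear weight drives every margin past 1 and the hinge loss to exactly zero, while a single wrong-side label makes the loss grow with the weight instead:

```python
import numpy as np

def mean_hinge(f, y):                         # y in {-1, +1}, f = raw model output
    return np.maximum(0.0, 1.0 - y * f).mean()

x = np.array([-0.8, -0.3, 0.2, 0.6])
y_clean = np.sign(x)                          # every label agrees with sign(x)
y_noisy = y_clean.copy()
y_noisy[2] = -1.0                             # one point now on the "wrong" side

loss_clean = [mean_hinge(w * x, y_clean) for w in (1.0, 10.0, 1000.0)]
loss_noisy = [mean_hinge(w * x, y_noisy) for w in (1.0, 10.0, 1000.0)]
print(loss_clean, loss_noisy)
```

This is why Bernoulli-sampled labels (which put points on both sides of the boundary) change the behaviour of the hinge loss so much: the "scale everything up" escape route is penalised.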
Update #1: Here are the results with a larger dataset (so sampling noise is less likely to be a factor). For the previous results I used 1024 training patterns, and for these I used 65536 (I work in a computer science department ;o). It seems to improve things a bit for the hinge loss, but the cross-entropy results look broadly similar.
Cross-entropy loss:
Hinge loss:
It is interesting (i.e. worrying) that for some of the simpler models, the output does not go through $(0, 1/2)$...
FWIW, this is the most complex of the hinge-loss models without the logistic transformation (but with an offset of 0.5 to make it easier to compare with the probabilities).
36,798 | Is there a Good Illustrative Example where the Hinge Loss (SVM) Gives a Higher Accuracy than the Logistic Loss | The results you shared are fascinating. Here's some more exploration in a similar direction.
The idea behind the Bousquet example is to create a situation where optimizing the logistic loss prioritizes fitting the underlying probability distribution at the expense of accuracy. But, it's not clear to me that accuracy and fit to the distribution would have to be opposed here. For example, it seems like a model that exactly matches the underlying distribution should yield both optimal accuracy and optimal logistic loss (at least in expectation).
I'll build a similar example that tries to simplify things, with only two models in the hypothesis space. One gives better accuracy but worse fit to the underlying distribution, and the other does the opposite. The tension between these two objectives is explicitly baked into the problem. I'll work directly with expected losses, so issues related to finite samples and/or optimization won't play any role.
True distribution
Suppose each point $x$ is drawn i.i.d. from the uniform distribution on $[-1, 1]$ and its class label $y \in \{-1, +1\}$ is drawn from a Bernoulli distribution $p(y \mid x)$. Similar to the Bousquet example, the conditional probability of the positive class is a 'wavy step function' (see plot below):
$$p(y=1 \mid x) =
.598 \ \sigma(100 x) + .201 + .2 \sin(20 x)$$
where $\sigma$ is the logistic sigmoid function.
Models
Suppose our hypothesis space contains only two models (with no free parameters so 'fitting' means choosing one or the other):
The first model is a step function:
$$\hat{p}_1(y=1 \mid x) = \begin{cases}
\sigma(1) & x \ge 0 \\
\sigma(-1) & x < 0 \\
\end{cases}$$
The second is a wavy step function, similar to the true distribution, but with slightly different parameters:
$$\hat{p}_2(y=1 \mid x) =
.398 \ \sigma(100 x) + .301 + .3 \sin(20 x)$$
Where needed (e.g. for computing the hinge loss), 'raw' classifier outputs are computed as $f(x) = \operatorname{logit}(\hat{p}(y=1 \mid x))$. Since we're interested in accuracy, point predictions are computed as the mode of the predicted distribution over class labels (equivalent to the sign of the 'raw' output), which is the optimal decision under the 0-1 loss.
Note that the hypothesis space doesn't contain the true distribution. We're forced to choose between two approximations that make different tradeoffs. The first model (step function) is designed to make more accurate point predictions, at the expense of fit to the underlying distribution. In contrast, the second model (wavy step) is designed to match the underlying distribution better, at the expense of accuracy. To confirm that these tradeoffs are indeed present, the table below shows the expected 0-1 loss (better for model 1) and expected KL divergence from the true distribution to the model (better for model 2).
Hinge vs. logistic loss
Here are the expected losses for each model, where the expectation is taken w.r.t. the true data generating process (calculated by numerical integration). The best model according to each loss function is shown in bold+parentheses:
$$\begin{array}{rc}
& \text{Model 1 (step)} & \text{Model 2 (wavy step)} \\
\text{0-1} & \mathbf{(.208)} & .269 \\
\text{KL} & .113 & \mathbf{(.044)} \\
\text{Hinge} & \mathbf{(.416)} & .580 \\
\text{Logistic} & .521 & \mathbf{(.474)} \\
\end{array}$$
Suppose we choose a model from our hypothesis space by minimizing the expected loss (i.e. what empirical risk minimization tries to do by proxy). As the table shows, minimizing the hinge loss gives the first model (prioritizing accuracy), whereas minimizing the logistic loss gives the second model (prioritizing fit to the underlying distribution).
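The expectations above can be sketched by numerical integration over $x \sim U[-1, 1]$ and $y \sim \text{Bernoulli}(p(x))$ on a dense grid, which should reproduce the table up to small discretization error. As before, the constant terms are taken as $+.201$ and $+.301$ so both conditionals stay in $(0, 1)$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

xs = np.linspace(-1.0, 1.0, 200_001)  # x ~ U[-1, 1]: expectations become grid means
p = 0.598 * sigmoid(100 * xs) + 0.201 + 0.2 * np.sin(20 * xs)  # true p(y=1 | x)

models = {
    "step": np.where(xs >= 0, sigmoid(1.0), sigmoid(-1.0)),
    "wavy": 0.398 * sigmoid(100 * xs) + 0.301 + 0.3 * np.sin(20 * xs),
}

losses = {}
for name, q in models.items():
    f = np.log(q / (1 - q))  # raw output f(x) = logit(q)
    losses[name] = {
        # average over y ~ Bernoulli(p) first, then over the grid of x values
        "0-1":      np.mean(np.where(f >= 0, 1 - p, p)),
        "KL":       np.mean(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))),
        "hinge":    np.mean(p * np.maximum(0, 1 - f) + (1 - p) * np.maximum(0, 1 + f)),
        "logistic": np.mean(p * np.log1p(np.exp(-f)) + (1 - p) * np.log1p(np.exp(f))),
    }
```

The orderings in the table (step wins on 0-1 and hinge, wavy wins on KL and logistic) come out of this computation directly.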
Notes
Contrary to this example, accuracy and fit to the underlying distribution aren't always opposed. Even when such a conflict exists, the hinge and logistic losses may not necessarily behave as shown above.
For example, the amplitude of model 1's step function matters. It doesn't affect the accuracy (which only depends on the sign), but it does affect the hinge loss. Increasing the amplitude too far (overconfidence) incurs increasing penalties for misclassified points. And, shrinking it too far (underconfidence) penalizes correct predictions, which increasingly fall inside the margin. In both cases, the hinge loss will eventually favor the second model, thereby accepting a decrease in accuracy. This emphasizes that: 1) the hinge loss doesn't always agree with the 0-1 loss (it's only a convex surrogate) and 2) the effects in question depend on the hypothesis space.
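The amplitude effect can be checked directly by sweeping the step height $a$ (raw output $\pm a$) and integrating the expected hinge loss over the same data generating process (constant term again taken as $+.201$). The loss is smallest near $a = 1$ and grows for both over- and under-confident steps:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

xs = np.linspace(-1.0, 1.0, 200_001)
p = 0.598 * sigmoid(100 * xs) + 0.201 + 0.2 * np.sin(20 * xs)

def hinge_of_step(a):
    # Expected hinge loss of a step model with raw output +a for x >= 0, -a otherwise.
    f = np.where(xs >= 0, a, -a)
    return np.mean(p * np.maximum(0, 1 - f) + (1 - p) * np.maximum(0, 1 + f))

curve = {a: hinge_of_step(a) for a in [0.1, 0.5, 1.0, 2.0, 5.0]}
```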
In practice, I'd bet that regularization plays an important role too, together with the model selection algorithm. For example, regularization strength is often chosen to maximize validation set accuracy. Even if the logistic loss is used to fit the parameters, using the 0-1 loss for model selection might sacrifice fit to the underlying distribution in favor of accuracy.
36,799 | Permutation testing for machine learning: permute entire set or only training set? | There are a few things to unpack here. The goal of permutation testing is to get a null distribution for your test statistic by permuting the labels and repeating your procedure many times.
Your test statistic is, e.g., average accuracy, and your procedure is CV. So you should permute the labels (all labels, because all labels go into the procedure) and then split the data into folds and run CV.
If you permute only the training set, then you are not getting a valid null, because you don't have randomness in your outcome labels. If you permute the data only in the test set, this is not valid either, because it does not take into account the dependence between CV folds, which is the whole reason for doing permutation testing rather than just some binomial test.
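A sketch of the whole procedure (permute all labels, then split into folds and run CV). The nearest-centroid model and the helper names here are just illustrative stand-ins, not anything prescribed by the test itself:

```python
import numpy as np

def cv_accuracy(X, y, n_folds=5, rng=None):
    # K-fold CV with a nearest-centroid classifier standing in for any model.
    n = len(y)
    idx = np.arange(n) if rng is None else rng.permutation(n)
    folds = np.array_split(idx, n_folds)
    accs = []
    for k in range(n_folds):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        classes = np.unique(y[train])
        centroids = np.stack([X[train][y[train] == c].mean(axis=0) for c in classes])
        dists = np.linalg.norm(X[test][:, None, :] - centroids[None, :, :], axis=2)
        pred = classes[np.argmin(dists, axis=1)]
        accs.append(np.mean(pred == y[test]))
    return float(np.mean(accs))

def permutation_p_value(X, y, n_perm=99, seed=0):
    rng = np.random.default_rng(seed)
    observed = cv_accuracy(X, y, rng=rng)
    # Null distribution: permute ALL labels, then redo the entire CV procedure.
    null = [cv_accuracy(X, rng.permutation(y), rng=rng) for _ in range(n_perm)]
    # p-value: fraction of runs (permuted + observed) at least as good as observed.
    p = (1 + sum(a >= observed for a in null)) / (n_perm + 1)
    return observed, p

# Toy demo: two well-separated classes should give a tiny p-value.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(2.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
obs, p_val = permutation_p_value(X, y)
```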
There are a few caveats.
If you perform the CV split randomly, then you can simply permute the data first and then carry out CV as usual.
If your CV split is done so that each fold has the same proportion of labels from each class, or so that the folds are balanced on some other variables, then you have to permute so that this also holds in your permutations. Usually, an easy way to do it is to permute first and then create your balanced splits.
If you don't have random folds, but they are already given, e.g., each fold is data from a different city, hospital, or measuring device, then you have to permute within these folds, so that labels from one hospital do not get permuted with labels from a different hospital.
You might have other so-called "exchangeability blocks" that are not based on folds: e.g., you have data from different hospitals but you don't split your data by hospital. In that case, you should permute your data within these blocks, but not necessarily within folds.
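Permuting within given folds or within exchangeability blocks is the same mechanical operation: shuffle labels only among samples that share a block id. A minimal sketch, with made-up block ids standing in for, say, hospitals:

```python
import numpy as np

def permute_within_blocks(y, blocks, rng):
    # Shuffle labels only among samples that share the same block id.
    y = np.asarray(y)
    blocks = np.asarray(blocks)
    y_perm = y.copy()
    for b in np.unique(blocks):
        mask = blocks == b
        y_perm[mask] = rng.permutation(y_perm[mask])
    return y_perm

# Toy demo with two hypothetical blocks (e.g. two hospitals):
rng = np.random.default_rng(0)
blocks = np.array([0, 0, 0, 1, 1, 1])
y = np.array([1, 2, 3, 4, 5, 6])
y_perm = permute_within_blocks(y, blocks, rng)
```

Labels never cross a block boundary, so each block keeps its own multiset of labels.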
36,800 | Permutation testing for machine learning: permute entire set or only training set? | At prediction time, you have the $(y_i, \hat y_i)$ pairs of actual labels and predictions. Notice that the result would be the same whether you permuted the actual labels $y_i$ or the predictions $\hat y_i$, since permuting either one breaks the pairing. So such a permutation test would create the null distribution for the scenario where the predictions were made at random but the distribution of the predictions is fixed.
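The pairing argument is easy to verify numerically: applying a permutation $\pi$ to the labels gives exactly the same accuracy as applying $\pi^{-1}$ to the predictions, since only the pairing between the two changes. A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=50)      # actual labels
y_hat = rng.integers(0, 2, size=50)  # fixed predictions
perm = rng.permutation(50)
inv = np.argsort(perm)               # inverse permutation

acc_perm_labels = np.mean(y[perm] == y_hat)  # permute the actual labels
acc_perm_preds = np.mean(y == y_hat[inv])    # permute the predictions instead
```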
But notice what the paper says:
A significant classifier for Test 1 rejects the null hypothesis that the features and the labels are independent, i.e., that there is no difference between the classes. If the original data contains dependency between data points and labels, then: (1) a significant classifier $f$ will use such information to achieve a good classification accuracy, resulting into a small $p$-value; (2) if the classifier $f$ is not significant with Test 1, $f$ was not able to use the existing dependency between data and labels in the original data. Finally, if the original data did not contain any real dependency between data points and labels, then all classifiers would have a high $p$-value and the null hypothesis would never be rejected.
Applying randomizations on the original data is therefore a powerful way to understand how the different classifiers use the structure implicit in the data, if such structure exists. [...]
It mentions "different classifiers" using the structure of the data. If you permuted whole data it is a different question that is answered. A model trained on data with permuted labels learns to find spurious correlations. Having a small train error in such a case tells you how much is it prone to overfit. There is another difference when permuting only the labels and comparing them to predictions. In the first case, you are looking at the distribution of predictions from a single model. In the second case, you look at the distribution of predictions from different "null" models. Only the second case tells you how does the classifier learns the structure of the data.
Finally, correct me if I'm wrong, but the paper does not seem to say anything about the train and test data. They seem to be describing training the classifier on a dataset $D$ and comparing the performance to the permuted datasets $D'$, but those are the training errors.