idx | question | answer
|---|---|---|
25,801 | Simple linear regression fit manually via matrix equations does not match lm() output | This R code can be used to calculate Y (a vector of y values, the fitted values) and Beta (a vector of the coefficients) via matrix regression for a given dataset which I called insert.dataset. This should work even if you add additional numeric variables to the formula.
library(matlib) # enables function inv() to calculate a matrix's inverse
model <- lm(formula = y ~ x, data = insert.dataset) # save linear model
beta0 <- rep(1, nrow(model$model)) # column of 1s representing coefficient beta0 (intercept)
X <- as.matrix(cbind(beta0, model$model[,-1]), nrow=nrow(model$model)) # create X matrix, replacing column of outcomes with beta0
# Matrix equation to create Y (fitted values), using X and coefficients
Y <- X %*% model$coefficients
model$fitted.values # should be identical to Y
# Matrix equation to create Beta (coefficients), using X and Y
Beta <- inv(t(X) %*% X) %*% t(X) %*% model$fitted.values
model$coefficients # should be identical to Beta
It should be noted that the output from lm already has vectors representing Y (model$fitted.values) and Beta (model$coefficients), but some small modification is needed to obtain X (the matrix of observed predictor values). That small modification replaces the column containing the observed y values with a column full of 1s (needed for multiplication with beta0), and it is why you are not getting the 2 * 1 matrix that you want.
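The same matrix algebra can be checked outside R; below is a minimal NumPy sketch (with made-up data) of the normal equations $\hat\beta = (X^\top X)^{-1} X^\top y$, with numpy.linalg.lstsq playing the role of lm(). (In R itself, base solve() can replace matlib's inv().)

```python
import numpy as np

# made-up data: y depends linearly on x plus noise
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5, size=50)

# design matrix: column of 1s (intercept) next to the predictor
X = np.column_stack([np.ones_like(x), x])

# normal equations: Beta = (X'X)^{-1} X'y
beta = np.linalg.inv(X.T @ X) @ X.T @ y

# reference least-squares fit, analogous to lm()
beta_ref, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(beta, beta_ref))  # True

fitted = X @ beta  # Y, the fitted values
```

The recovered intercept and slope land near the true 2 and 3 used to generate the data.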
25,802 | Negative Binomial "Process" | Several stochastic processes lead to marginal counts having a Negative
Binomial (NB) distribution and can therefore be called NB processes.
Among them, the NB Lévy Process is of special interest since
increments (counts) over non-overlapping time intervals are
independent, a property shared with the Poisson Process, the Gamma
Process and the Wiener Process. The count $N_t$ on an interval of
length $t$ has the NB distribution
$$
N_t \sim \textrm{NB}(r,\,p), \quad r = \gamma t
$$
so the process depends on the two parameters $\gamma >0$ (with the
dimension of an inverse time) and the probability $p$ ($0 < p < 1$).
The expectation is proportional to the interval length, and so is its
variance
$$
\mathbb{E}(N_t) = \gamma t \, (1-p)/p \qquad
\textrm{Var}(N_t) = \gamma t \, (1-p)/p^2.
$$
The variance is greater than the mean (overdispersion), and the
index of dispersion $\textrm{Var}(N_t)/\mathbb{E}(N_t) = 1/p$ does not
depend on $t$. When $p$ is close to $1$ and $\gamma (1-p)$
is close to $\lambda >0$, the process behaves like
a Poisson Process with rate $\lambda$.
An explanation for overdispersion is that several events can happen at
the same time, so a small interval can contain more than one event.
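These moment formulas are easy to check by simulation. A short Python sketch (parameter values arbitrary; NumPy's negative_binomial(n, p) counts failures before the n-th success, matching the NB(r, p) parameterization above):

```python
import numpy as np

rng = np.random.default_rng(42)
gamma, p, t = 1.5, 0.4, 2.0
r = gamma * t  # the NB "size" parameter grows linearly with interval length

counts = rng.negative_binomial(r, p, size=500_000)

print(counts.mean())                 # ~ r*(1-p)/p   = 4.5
print(counts.var())                  # ~ r*(1-p)/p^2 = 11.25
print(counts.var() / counts.mean())  # ~ 1/p         = 2.5, independent of t
```

The sample index of dispersion stays at 1/p whatever t is chosen, as stated above.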
It is easy to fit such a process by Maximum Likelihood when the
intervals have different lengths. In this case we face an NB regression
with a link function differing from the default link in NB GLMs. A
special likelihood maximisation is useful.
The article by T.J. Kozubowski and K. Podgorski provides theoretical
results as well as an illustration.
Curiously enough, this process does not seem to be frequently used
as such by statisticians.
25,803 | Understanding Feature Hashing | The matrix is constructed in the following way:
rows represent lines
columns represent features
and every entry matrix(i,j)=k means:
In line i, the word with index j appears k times.
So "to" is mapped to index 3. It appears exactly one time in line 1. So m(1,3)=1.
More examples
"likes" is mapped to index 2. It appears exactly two times in the first line. So m(1,2)=2.
"also" is mapped to index 6. It does not appear in line 1, but one time in line 2. So m(1,6)=0 and m(2,6)=1.
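A short Python sketch of this construction (the two lines and the word-to-index mapping follow the usual toy example; the exact indices are fixed here by assumption so that "likes"=2, "to"=3, "also"=6):

```python
from collections import Counter

lines = [
    "john likes to watch movies mary likes movies too",
    "john also likes to watch football games",
]
# assumed word -> column index (1-based, matching the example above)
vocab = {"john": 1, "likes": 2, "to": 3, "watch": 4, "movies": 5,
         "also": 6, "football": 7, "games": 8, "mary": 9, "too": 10}

# m[i][j] = how often the word with index j appears in line i (1-based)
m = [[0] * (len(vocab) + 1) for _ in range(len(lines) + 1)]
for i, line in enumerate(lines, start=1):
    for word, k in Counter(line.split()).items():
        m[i][vocab[word]] = k

print(m[1][3], m[1][2], m[1][6], m[2][6])  # 1 2 0 1
```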
25,804 | Understanding Feature Hashing | As Steffen pointed out, the example matrix encodes the number of times a word appears in a text. The position of the encoding into the matrix is given by the word (column position on the matrix) and by the text (row position on the matrix).
Now, the hashing trick works the same way, though you don't have to define in advance the dictionary that gives the column position of each word.
In fact, it is the hashing function that gives you the range of possible column positions (its output lies between a minimum and a maximum value) and the exact position of the word you want to encode into the matrix. For example, imagine that the word "likes" is hashed by our hashing function into the number 5674; then column 5674 will contain the encodings relative to the word "likes".
In this fashion you won't need to build a dictionary before analyzing the text. If you use a sparse matrix as your text matrix, you won't even have to fix the matrix size in advance. Just by scanning the text, on the fly, you convert words into column positions via the hashing function, and your text matrix is populated with data (frequencies, for example) according to the document you are progressively analyzing (row position).
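A minimal Python sketch of the idea (the hash function, table size, and example text are all arbitrary choices here, not a reference implementation):

```python
import hashlib
from collections import defaultdict

N_COLS = 8192  # assumed number of columns (hash table size)

def column(word: str) -> int:
    # stable hash of the word, reduced to a valid column index
    digest = hashlib.md5(word.encode()).hexdigest()
    return int(digest, 16) % N_COLS

# build one (sparse) row on the fly, with no pre-built dictionary
row = defaultdict(int)
for word in "john likes to watch movies mary likes movies too".split():
    row[column(word)] += 1

# "likes" occurs twice, so its column holds that count
print(row[column("likes")])
```

Note that different words can hash to the same column (a collision), in which case their counts add up; with a large enough table this is rare.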
25,805 | Confidence intervals for cross-validated statistics | For our credit risk paper on predicting loan defaults, a reviewer also suggested we produce confidence intervals for cross validation estimates and in particular recommended bootstrapping of the resampled mean.
Bootstrapped CIs were produced for risk-ranking measures including the AUC, the H-measure and the Kolmogorov-Smirnov (K-S) statistic. They were used to compare the discrimination performance of two survival models (mixture cure and Cox) with logistic regression.
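As a sketch of that reviewer's suggestion, bootstrapping the mean of per-fold statistics might look like this in Python (the AUC values below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical per-fold AUCs from cross-validation
fold_auc = np.array([0.81, 0.79, 0.84, 0.80, 0.83,
                     0.78, 0.82, 0.85, 0.80, 0.81])

# resample the folds with replacement and recompute the mean each time
boot_means = np.array([
    rng.choice(fold_auc, size=fold_auc.size, replace=True).mean()
    for _ in range(10_000)
])

# percentile bootstrap 95% CI for the mean AUC
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean AUC {fold_auc.mean():.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```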
It would be interesting to learn of other approaches to such CIs.
Tong, E.N.C., Mues, C. & Thomas, L.C. (2012). Mixture cure models in credit scoring: If and when borrowers default. European Journal of Operational Research, 218(1), 132-139.
25,806 | Confidence intervals for cross-validated statistics | If you can't assume independence of the data splits (which in many scenarios you can't), here's a method that allows for the computation of "valid" confidence intervals around your error. It was recently published by Stanford (2021) so there still aren't python packages, but they did create an R package.
I was interested in the topic so I made a less technical writeup, but the paper tells the full story.
Paper Info (in case the link dies):
Name: Cross-validation: what does it estimate and how well does it do it?
Authors: Stephen Bates, Trevor Hastie, and Robert Tibshirani
Year: 2021
Key conclusions: "We have made two main contributions. First, we discussed point estimates of prediction error via subsampling
techniques. Our primary result is that common estimates of prediction error—cross-validation, bootstrap, data splitting, and covariance penalties—cannot be viewed as estimates of the prediction error of the final model fit on the whole data. ... Secondly, we discuss inference for cross-validation, deriving an estimator for the MSE of the CV point estimate, nested CV."
25,807 | Confidence intervals for cross-validated statistics | Recently I published a paper reporting means and 95% confidence intervals for a number of performance statistics (accuracy, sensitivity, specificity, etc.) for a logistic regression model.
We used 10 repetitions of 10-fold cross-validation; taking the test-set result for each fold produced 100 values for each performance statistic.
If you can reasonably assume these values are independent, then 95% confidence intervals can be calculated from these values. If you can't assume independence, then bootstrapping as discussed above may be more appropriate.
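Under that independence assumption, the interval is just a standard mean plus/minus z times the standard error over the 100 values. A Python sketch with simulated accuracies (a normal-approximation interval; for n = 100 the t and z quantiles are nearly identical):

```python
import statistics as st
import random

random.seed(1)
# hypothetical accuracies from 10 x 10-fold CV (100 test-fold results)
acc = [random.gauss(0.85, 0.03) for _ in range(100)]

mean = st.mean(acc)
se = st.stdev(acc) / len(acc) ** 0.5
z = st.NormalDist().inv_cdf(0.975)  # ~1.96

ci = (mean - z * se, mean + z * se)
print(f"mean {mean:.3f}, 95% CI ({ci[0]:.3f}, {ci[1]:.3f})")
```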
25,808 | $N(\theta,\theta)$: MLE for a Normal where mean=variance | There are some typos (or algebraic mistakes) in the signs of the log-likelihood, followed by the corresponding unpleasant consequences.
Since this is a well-known problem, I will only point out a reference with the solution:
Asymptotic Theory of Statistics and Probability, p. 53, by Anirban DasGupta.
Since this is a well-known problem, I will only point out a r | $N(\theta,\theta)$: MLE for a Normal where mean=variance
There are some typos (or algebraical mistakes) in the signs of the log-likelihood, followed by the corresponding unpleasant consequences.
Since this is a well-known problem, I will only point out a reference with the solution:
Asymptotic Theory of Statistics and Probability pp. 53, by Anirban DasGupta. | $N(\theta,\theta)$: MLE for a Normal where mean=variance
There are some typos (or algebraical mistakes) in the signs of the log-likelihood, followed by the corresponding unpleasant consequences.
Since this is a well-known problem, I will only point out a r |
25,809 | $N(\theta,\theta)$: MLE for a Normal where mean=variance | Recall that the normal distribution $N(\mu, \sigma^2)$ has pdf $f(x\mid \mu ,\sigma ^{2})={\frac {1}{{\sqrt {2\pi \sigma ^{2}}}\ }}\exp {\left(-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}\right)}.$ Note here that $\mu = \theta$ and $\sigma^2 = \theta$, and therefore $\sigma = \sqrt{\theta}$.
\begin{aligned}
L(x_1,x_2,...,x_n | \theta) &= \prod_{i=1}^n f(x_i | \theta)
\\
&= \prod_{i=1}^n \frac{1}{\sqrt{2 \pi \theta}} \ \exp \Big \{ - \frac{1}{2 \theta} (x_i - \theta)^2 \Big\}
\\
& = (2 \pi)^{-n/2} (\theta)^{-n /2} \prod_{i=1}^n \ \exp \Big \{ - \frac{1}{2 \theta} (x_i - \theta)^2 \Big\}
\\
& = (2 \pi)^{-n/2} (\theta)^{-n /2} \ \exp \Big \{ - \frac{1}{2 \theta} \sum_{i=1}^n (x_i - \theta)^2 \Big\}
\\
\log L& = - \frac{n}{2} \log(2\pi) - \frac{n}{2} \log(\theta) - \frac{1}{2 \theta} \sum_{i=1}^n (x_i - \theta)^2
\end{aligned}
Consider the term $\frac{1}{2 \theta} \sum_{i=1}^n (x_i - \theta)^2$ which can be expanded and simplified
\begin{aligned}
\frac{1}{2 \theta} \sum_{i=1}^n (x_i - \theta)^2 & = \frac{1}{2 \theta} \sum_{i=1}^n (x_i - \theta)(x_i - \theta)
\\
& = \frac{1}{2 \theta} \sum_{i=1}^n \left( x_i^2 - 2 \theta x_i + \theta^2 \right)
\\
& = \frac{1}{2 \theta} \left( \sum_{i=1}^n (x_i^2) - 2 \theta \sum_{i=1}^n (x_i) + n\theta^2 \right)
\\
& = \frac{1}{2 \theta} \sum_{i=1}^n (x_i^2) - \sum_{i=1}^n (x_i) + \frac{n\theta}{2}
\end{aligned}
We can now compute the derivative with respect to $\theta$, equate to zero and solve for $\theta$
\begin{aligned}
\log L& = - \frac{n}{2} \log(2\pi) - \frac{n}{2} \log(\theta) - \left( \frac{1}{2 \theta} \sum_{i=1}^n (x_i^2) - \sum_{i=1}^n (x_i) + \frac{n\theta}{2} \right)
\\
\frac{d}{d\theta} \log L & = \frac{-n}{2\theta} - \left( \frac{-1}{2\theta^2} \sum_{i=1}^n (x_i^2) + \frac{n}{2} \right)
\\
& = \frac{-n}{2\theta} + \frac{1}{2\theta^2} \sum_{i=1}^n (x_i^2) - \frac{n}{2}
\\
&\text{let $s = \frac{1}{n} \sum_{i=1}^n (x_i^2)$; setting the derivative to zero and multiplying through by $\frac{2\theta^2}{n}$ gives}
\\
0 & = - \theta^2 - \theta + s
\\
\hat \theta &= \frac{\sqrt{1 + 4s} -1 }{2}
\end{aligned}
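The closed form can be sanity-checked numerically: since $\mathbb{E}(X^2) = \theta + \theta^2$, plugging the sample second moment $s$ into $\hat\theta = (\sqrt{1+4s}-1)/2$ should recover $\theta$ on simulated data. A Python sketch (the true $\theta$ is chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
theta = 2.0  # true parameter: mean = variance = 2

x = rng.normal(loc=theta, scale=np.sqrt(theta), size=200_000)

s = np.mean(x**2)                         # estimates E[X^2] = theta + theta^2 = 6
theta_hat = (np.sqrt(1 + 4 * s) - 1) / 2  # MLE from the derivation above

print(theta_hat)  # close to 2.0
```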
25,810 | $N(\theta,\theta)$: MLE for a Normal where mean=variance | Consider $\log f(x) = -0.5\log (2 \pi \theta) - 0.5 \frac{(x - \theta)^2}{\theta}$ and
$$
\frac{\partial}{\partial\theta} \log f(x) \propto -\frac{1}{\theta}+\frac{x^2}{\theta^2} -1
$$
Thus,
$$
\frac{\partial}{\partial\theta} \ell (x) = 0 = -n(1 + \frac{1}{\theta}) +\sum \frac{x_k^2}{\theta^2}
$$
so $\theta^2 + \theta = \frac{1}{n}\sum x_k^2$ which gives
$\theta^* = \sqrt{\frac{1}{n}\sum x_k^2 + \tfrac{1}{4}}-\frac{1}{2}$.
Ignore the negative root, since it would contradict $\theta > 0$ (a variance must be positive).
25,811 | Is it possible to have a variable that acts as both an effect modifier and a confounder? | A confounding variable must:
Be independently associated with the outcome;
Be associated with the exposure;
Not lie on the causal pathway between exposure and outcome.
These are the criteria for considering a variable as a potential confounding variable. If the potential confounder is discovered (through stratification and adjustment testing) to actually confound the relation between risk and outcome, then any unadjusted association seen between risk and outcome is an artifact of the confounder and hence not a real effect.
An effect modifier on the other hand does not confound. If an effect is real but the magnitude of the effect is different depending on some variable X, then that variable X is an effect modifier.
To answer your question: to my understanding, it is therefore not possible for a variable to act as both an effect modifier and a confounding variable for a given study sample and a given pair of risk factors and outcomes.
You can find more information here.
25,812 | Is it possible to have a variable that acts as both an effect modifier and a confounder? | Yes, it is absolutely possible that a variable is both a confounder and an effect modifier. We can run a quick simulation in R to verify this: consider the following mechanism with $x$ being the treatment and $y$ the outcome. $c$ influences both $x$ and $y$ and is therefore a confounder. But it also interacts with $x$ and so modifies its effect on $y$.
set.seed(234)
c <- runif(10000)
x <- c + rnorm(10000, 0, 0.1)
y <- 3*x + 2*x*c + rnorm(10000)
So we know the true causal mechanism is $y = 3x + 2xc + \varepsilon$. Clearly, $c$ modifies the effect of $x$. However, when we run the regression of $y$ on $x$ only, we also see the confounding kicking in:
lm(y ~ x)
Coefficients:
(Intercept) x
-0.258 4.856
Finally, as pointed out in my comment, the definition given by oisyutat is wrong. It mirrors what Judea Pearl calls "the associational criterion" for a confounder, and he gives multiple reasons why this definition fails. See Pearl (2009), Causality, section 6.3.
25,813 | If $X$ has a log-normal distribution, does $X-c$ also have a log-normal distribution? | The answer to your question is (essentially) no and your argument has the right idea. Below, we formalize it a bit. (For an explanation of the caveat above, see @whuber's comment below.)
If $X$ has a lognormal distribution this means that $\log(X)$ has a normal distribution. Another way of saying this is that $X = e^{Z}$ where $Z$ has a $N(\mu, \sigma^2)$ distribution for some $\mu \in \mathbb{R}, \sigma^2 >0$. Note that by construction, this implies that $X \geq 0$ with probability one.
Now, $X-c = e^Z - c$ cannot have a lognormal distribution because
$$ P(e^Z - c < 0 ) = P(e^Z < c) = P(Z < \log(c)) = \Phi \left( \frac{ \log(c) - \mu }{\sigma} \right) $$
which is strictly positive for any $c > 0$. Therefore, $e^Z - c$ has a positive probability of taking on negative values, which precludes $e^Z - c$ from being lognormally distributed.
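This can be checked numerically. A small Python sketch (standard library only; the parameter values $\mu = 0.5$, $\sigma = 1.2$, $c = 2$ are arbitrary illustrations) compares the empirical fraction of negative values of $e^Z - c$ with $\Phi\left(\frac{\log c - \mu}{\sigma}\right)$:

```python
import math
import random

mu, sigma, c = 0.5, 1.2, 2.0   # arbitrary illustration parameters
random.seed(0)
n = 200_000

# Empirical probability that e^Z - c is negative
neg = sum(math.exp(random.gauss(mu, sigma)) - c < 0 for _ in range(n)) / n

# Theoretical value: Phi((log c - mu) / sigma), written via the error function
phi = 0.5 * (1 + math.erf((math.log(c) - mu) / (sigma * math.sqrt(2))))

print(round(neg, 3), round(phi, 3))  # the two values should agree closely
```

Both numbers are strictly positive, confirming that $e^Z - c$ puts positive mass on the negative half-line.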
In summary, the lognormal distribution is not closed under subtraction of a positive constant. It is, however, closed under multiplication by a (positive) constant, but that's an entirely different question.
25,814 | Multilabel logistic regression | In principle, yes - I'm not sure that these techniques are still called logistic regression, though.
Actually your question can refer to two independent extensions to the usual classifiers:
You can require the sum of all memberships for each case being one ("closed world" = the usual case)
or drop this constraint (sometimes called "one-class classifiers")
This could be trained by multiple independent LR models although one-class problems are often ill-posed (this class vs. all kinds of exceptions which could lie in all directions) and then LR is not particularly well suited.
partial class memberships: each case belongs with membership $\in [0, 1]^{n_{classes}}$ to each class, similar to memberships in fuzzy cluster analysis:
Assume there are 3 classes A, B, C. Then a sample may be labelled as belonging to class B. This can also be written as membership vector $[A = 0, B = 1, C = 0]$. In this notation, the partial memberships would be e.g. $[A = 0.05, B = 0.95, C = 0]$ etc.
different interpretations can apply, depending on the problem (fuzzy memberships or probabilities):
fuzzy: a case can belong half to class A and half to class C: [0.5, 0, 0.5]
probability: the reference (e.g. an expert classifying samples) is 80 % certain that it belongs to class A but says a 20 % chance exists that it is class C while being sure it is not class B (0 %): [0.8, 0, 0.2].
another probability: expert panel votes: 4 out of 5 experts say "A", 1 says "C": again [0.8, 0, 0.2]
for prediction, e.g. the posterior probabilities are not only possible but actually fairly common
it is also possible to use this for training
and even validation
The whole idea of this is that for borderline cases it may not be possible to assign them unambiguously to one class.
Whether and how you want to "harden" a soft prediction (e.g. posterior probability) into a "normal" class label that corresponds to 100% membership to that class is entirely up to you. You may even return the result "ambiguous" for intermediate posterior probabilities. Which is sensible depends on your application.
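One possible hardening rule can be sketched as follows (Python for illustration; the 0.8 cut-off is an arbitrary assumption, not a recommendation from the cited papers): accept the winning class only when its membership is decisive, and return "ambiguous" otherwise.

```python
def harden(memberships, classes=("A", "B", "C"), threshold=0.8):
    """Turn a soft membership vector into a crisp label, or report ambiguity."""
    best = max(range(len(memberships)), key=lambda i: memberships[i])
    if memberships[best] >= threshold:
        return classes[best]
    return "ambiguous"

print(harden([0.05, 0.95, 0.0]))  # clearly class B
print(harden([0.5, 0.0, 0.5]))    # no decisive winner: ambiguous
```

Whether such a threshold, a plain argmax, or no hardening at all is appropriate depends on the application, as noted above.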
In R e.g. nnet::multinom which is part of MASS does accept such data for training. An ANN with logistic sigmoid and without any hidden layer is used behind the scenes.
I developed package softclassval for the validation part.
One-class classifiers are nicely explained in Richard G. Brereton: Chemometrics for Pattern Recognition, Wiley, 2009.
We give a more detailed discussion of the partial memberships in this paper:
Claudia Beleites, Kathrin Geiger, Matthias Kirsch, Stephan B Sobottka, Gabriele Schackert & Reiner Salzer:
Raman spectroscopic grading of astrocytoma tissues: using soft reference information.
Anal Bioanal Chem, 2011, Vol. 400(9), pp. 2801-2816
25,815 | Multilabel logistic regression | One straightforward way to do multi-label classification with a multi-class classifier (such as multinomial logistic regression) is to assign each possible assignment of labels to its own class. For example, if you were doing binary multi-label classification and had 3 labels, you could assign
[0 0 0] = 0
[0 0 1] = 1
[0 1 0] = 2
and so on, resulting in $2^3 = 8$ classes.
The most obvious problem with this approach is you can end up with a huge number of classes even with a relatively small number of labels (if you have $n$ labels you'll need $2^n$ classes). You also won't be able to predict label assignments that aren't present in your dataset, and you'll be making rather poor use of your data, but if you have a lot of data, and good coverage of the possible label assignments, these things may not matter.
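For binary labels, the assignment above is simply reading each label vector as a base-2 number. A minimal Python sketch (the helper names are made up for illustration):

```python
def labels_to_class(labels):
    """Map a binary multi-label vector to a single class index."""
    klass = 0
    for bit in labels:      # read the vector as a base-2 number
        klass = klass * 2 + bit
    return klass

def class_to_labels(klass, n_labels):
    """Invert the mapping: recover the label vector from the class index."""
    return [(klass >> i) & 1 for i in reversed(range(n_labels))]

print(labels_to_class([0, 0, 0]))  # -> 0
print(labels_to_class([0, 0, 1]))  # -> 1
print(labels_to_class([0, 1, 0]))  # -> 2
print(class_to_labels(2, 3))       # -> [0, 1, 0]
```

The round trip makes explicit how a multi-class prediction is converted back into a label assignment.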
Moving beyond this and what was suggested by others, you'll probably want to look at structured prediction algorithms such as conditional random fields.
25,816 | Multilabel logistic regression | This problem is also related to cost-sensitive learning, where predicting a label for a sample can have a cost. For multi-label samples the costs for those labels are low while the cost for other labels is higher.
You can take a look at this tutorial; you can also find the corresponding slides here.
25,817 | How to correctly treat multiple data points per each subject | It would be a violation of independence to "group the data by conditions and not care that multiple data points come from one subject". So that is a no go. One approach is "to take the mean of all measurements for each condition from each subject and then compare the means". You could do it that way, you wouldn't violate independence, but you are losing some information in the aggregation to subject-level means.
On the face of it, this sounds like a mixed design with conditions between subjects and multiple time periods measured within subjects. However, that raises the question, why did you collect data at multiple time points? Is the effect of time, or the progression of a variable over time expected to be different between conditions? If the answer is yes to either of those questions, then given the structure of the data, I would expect that what you are interested in is a mixed ANOVA. The mixed ANOVA will partition the subject variance out of the SSTotal "behind your back" as it were. But whether that partitioning helps out your between subjects test of conditions depends on several other factors.
Anyway, in SPSS/PASW 18 Analyze -> General Linear Model -> Repeated Measures. You'll have one row for each subject and one column for each time point, as well as one for their condition identifier. The condition identifier will go into the "between" section and the repeated measures will be taken care of when you define the repeated measure factor.
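The wide layout SPSS expects can be sketched in Python (subject IDs, scores, and column names here are hypothetical): one row per subject, one column per time point, plus the between-subjects condition identifier.

```python
# Hypothetical long-format records: (subject, time, score)
long_rows = [
    ("s1", 1, 10.0), ("s1", 2, 12.0), ("s1", 3, 11.0),
    ("s2", 1,  9.0), ("s2", 2,  9.5), ("s2", 3, 10.5),
]
conditions = {"s1": "A", "s2": "B"}   # between-subjects factor

# Reshape to wide: one row per subject, one column per time point
wide = {}
for subject, time, score in long_rows:
    wide.setdefault(subject, {})[f"t{time}"] = score
for subject in wide:
    wide[subject]["condition"] = conditions[subject]

print(wide["s1"])  # {'t1': 10.0, 't2': 12.0, 't3': 11.0, 'condition': 'A'}
```

Each dictionary in `wide` corresponds to one row of the SPSS data sheet described above.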
25,818 | How to correctly treat multiple data points per each subject | Repeated measures design is the traditional way to handle this, as drknexus mentions. When doing that kind of analysis you have to aggregate to one score/condition/subject. It's sensitive to violations of assumptions of sphericity and other issues. However, the more modern technique is to use multi-level modelling or linear mixed effects. Using this technique you do not aggregate the data. There are several treatments of this available but I don't currently know the best basic tutorial. Baayen (2008) Chapter 7 is good. Pinheiro & Bates (2000) is very good but from the sounds of things follow their advice in the intro and read the bits recommended for beginners.
If you want to just get an ANOVA style result, assuming all of your data are in long format (one line / data point) and you have columns indicating subject, response (y), and a condition variable (x), you could try looking at something like this in R (make sure the lme4 package is installed).
library(lme4)
dat <- read.table('myGreatData.txt', header = TRUE)
m <- lmer( y ~ x + (1|subject), data = dat)
summary(m)
anova(m)
You could of course have many more conditions variable columns, perhaps interacting. Then you might change the lmer command to something like...
m <- lmer( y ~ x1 * x2 + (1|subject), data = dat)
(BTW, I believe that not aggregating in repeated measures in order to increase power is a formal fallacy. Anyone remember the name?)
25,819 | Covariance matrix of uniform spherical distribution | According to @whuber's answer posted here, the spherical distribution is best seen as
$$ \left( Y_1 = \frac{X_1}{\sqrt{X_1^2+...+X_n^2}}, ... , Y_n = \frac{X_n}{\sqrt{X_1^2+...+X_n^2}}\right)$$
where all the $X_i$ are independent Gaussian $(0,1)$.
If $(Y_1, ..., Y_i, ... Y_n)$ is uniform on the unit sphere, then so is $(Y_1, ..., -Y_i, ... Y_n)$, so they have the same distribution. In particular this implies that $E(Y_i)=-E(Y_i)$ and also that $E(Y_iY_j) = - E(Y_iY_j)$ for all $j \neq i$. Therefore the means and the covariance terms are equal to 0, as @whuber mentions in the comments.
For the variance, notice that
$$E \left( Y_1^2 \right) + ... + E \left( Y_n^2 \right) =
E \left( Y_1^2 + ... + Y_n^2 \right) = 1.$$
For reasons of symmetry, the $Y_i$ are obviously exchangeable (but not independent), so that $E \left( Y_1^2 \right) = ... = E \left( Y_n^2 \right)$ and thus each of them is equal to $1/n$.
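A quick simulation (Python standard library; the dimension $n = 4$ and the number of draws are arbitrary) illustrates both conclusions: the empirical $E(Y_1^2)$ is close to $1/n$ and the empirical $E(Y_1 Y_2)$ is close to $0$.

```python
import random

random.seed(1)
n, m = 4, 100_000   # dimension and number of draws (arbitrary)

samples = []
for _ in range(m):
    x = [random.gauss(0, 1) for _ in range(n)]
    r = sum(v * v for v in x) ** 0.5
    samples.append([v / r for v in x])   # projected onto the unit sphere

# Empirical second moments E[Y_i Y_j] (the means are 0 by symmetry)
var_1  = sum(s[0] * s[0] for s in samples) / m   # should be close to 1/n = 0.25
cov_12 = sum(s[0] * s[1] for s in samples) / m   # should be close to 0

print(round(var_1, 3), round(cov_12, 3))
```

The construction in the code is exactly the normalized-Gaussian representation quoted at the top of the answer.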
In summary, the variance terms are equal to $1/n$ and the covariance terms are equal to $0$, so the covariance matrix is $\frac{1}{n} \mathbf{I}$. This is a great example of uncorrelated dependent variables (for example if $Y_1 = 1$ then all other values have to be $0$).
25,820 | Vector multiplication in BUGS and JAGS | Unlike JAGS, WinBUGS and OpenBUGS do not do this form of vectorization; you have to write a loop and compute each element 'by hand', as described above.
25,821 | Vector multiplication in BUGS and JAGS | Martyn Plummer points out that this is implemented in JAGS, which I missed when reading the manual. From Ch 5:
Scalar functions taking scalar arguments are automatically vectorized.
They can also be called when the arguments are arrays with conforming
dimensions, or scalars. So, for example, the scalar $c$ can be added
to the matrix $A$ using
B <- A + c
instead of the more verbose form
D <- dim(A)
for (i in 1:D[1]) {
for (j in 1:D[2]) {
B[i,j] <- A[i,j] + c
}
}
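The equivalence of the vectorized and verbose forms is easy to check outside BUGS; here is a Python sketch using nested lists as the matrix (the values are arbitrary):

```python
# Hypothetical 2x3 matrix A and scalar c, for illustration only
A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0]]
c = 0.5

# Vectorized form: B <- A + c
B_vec = [[a + c for a in row] for row in A]

# Verbose form: explicit double loop, as required by WinBUGS/OpenBUGS
B_loop = [[0.0] * len(A[0]) for _ in A]
for i in range(len(A)):
    for j in range(len(A[0])):
        B_loop[i][j] = A[i][j] + c

print(B_vec == B_loop)  # True: both forms produce the same matrix
```

Both forms add the scalar to every element; JAGS merely lets you skip writing the loop.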
25,822 | Vector multiplication in BUGS and JAGS | To do element-wise multiplication you can just make a for loop in those languages and that's it! I've used for loops in WinBUGS with no problems.
25,823 | Vector multiplication in BUGS and JAGS | Incidentally, element-wise multiplication of two equal length vectors is called the Hadamard product (aka the Schur product).
25,824 | Can random effects apply only to categorical variables? | This is a good and a very basic question.
The interpretation of random effects is very domain-specific and is dependent on the modeling choice (the statistical model or being a Bayesian or frequentist). For a very good discussion, see page 245, Gelman and Hill (2007). For a Bayesian everything is random (even though parameters may have a true fixed value, they are modeled as random), and a frequentist can also choose a parameter value to be a fixed effect that would have been otherwise modeled as random (see Casella, 2008, discussion about blocks to be fixed or random in example 3.2).
Edit (after comment)
Data are fixed after you observe them. If they are continuous, they should be modeled as continuous. You can model categorical variables as categorical and sometimes as continuous (like in an ordinal variable setting). The parameters are unknown and they may be modeled as fixed or random. The parameters essentially relate response to predictors. If you want an individual predictor's slope (or its coefficient in a linear model) to vary for each response, model it as random; otherwise model it as fixed. Similarly, if you want the intercept to vary across groups, it should be modeled as random; otherwise it should be fixed.
25,825 | Can random effects apply only to categorical variables? | Your question may have already been solved, but it is actually written in a text book;
Random effects are categorical variables whose levels are viewed as a sample from some larger population, as opposed to fixed effects, whose levels are of interest in their own right,
on page 232 of: Alan Grafen and Rosie Hails (2002) "Modern statistics for the life sciences", Oxford University Press.
25,826 | Can random effects apply only to categorical variables? | I think the issue is that there are two things involved here. A typical example of random effects might be predicting the grade point average (GPA) of a college student based on a number of factors including their average score in a series of tests during high school.
The average score is continuous. You would typically have a varying intercept, or intercept and slope, for the average score for each individual. The individual is obviously categorical.
So when you say "only applies to categorical variables" it's a little vague. Say you only consider a random intercept for the average score. In this case, your random intercept for a continuous quantity and in fact is probably modeled as something like a gaussian variable with a mean and standard deviation to be determined by the procedure. But this random intercept is determined across a population of students where each student is identified by a categorical variable.
You could use a "continuous" variable instead of student ID. Maybe you could choose a student's height. But it would essentially have to be treated as if it were categorical. If your height measurements were very precise you'd again end up with a unique height for every student so would have accomplished nothing different. If your height measurements were not very precise, you'd end up lumping multiple students together at each height. (Mixing their scores in a possibly ill-defined way.)
This is sort-of the opposite of interactions. In an interaction, you're multiplying two variables and essentially treating both as continuous. A categorical variable would be broken up into a set of 0/1 dummy variables and the 0 or 1 would be multiplied by the other variable in the interaction.
The bottom line is that a "random effect" is in some sense just a coefficient which has a distribution (is modeled) rather than a fixed value.
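As an illustrative sketch only (not part of the original answer): in lme4 syntax, a per-student random intercept looks like this, with `student` as the categorical grouping factor and `avg_score` the continuous predictor. The data frame `gpa_dat` and its column names are hypothetical.

```r
library(lme4)  # assumed available

# random intercept per student (the categorical grouping factor),
# fixed slope for the continuous average test score
m1 <- lmer(gpa ~ avg_score + (1 | student), data = gpa_dat)

# varying intercept AND slope for avg_score across students,
# mirroring the "intercept, or intercept and slope" wording above
m2 <- lmer(gpa ~ avg_score + (1 + avg_score | student), data = gpa_dat)
```

Note how the continuous variable appears inside the random-effects term, but the grouping on the right of `|` is categorical — exactly the distinction the answer draws.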
25,827 | Highly irregular time series | I have spent quite some time building a general framework for unevenly-spaced time series: http://www.eckner.com/research.html
In addition, I have written a paper about trend and seasonality estimation for unevenly-spaced time series.
I hope you will find the results helpful!
25,828 | Highly irregular time series | I don't know if a mixed model is very appropriate (using the standard packages where the random effect structure is a linear predictor), unless you think the data at all time points should be exchangeable with each other in some sense (in which case the irregular intervals are a non-issue) - it wouldn't really be modeling the temporal autocorrelation in a reasonable way. It's possible you could trick lmer() into doing some sort of autogressive thing but how exactly you'd do that escapes me right now (I may not be thinking straight). Also, I'm not sure what the "grouping variable" would be that induces autocorrelation in the mixed model scenario.
If the temporal autocorrelation is a nuisance parameter and you don't expect it to be too large, then you could bin the data into epochs that are essentially disjoint from each other in terms of correlation (e.g. separate the time series at points where there are months of no data) and view those as independent replicates. You could then do something like a GEE on this modified data set where the "cluster" is defined by which epoch you are in, and the entries of the working correlation matrix are a function of how far apart the observations were made. If your regression function is correct, then you will still get consistent estimates of the regression coefficients, even if the correlation structure is misspecified. This would also allow you to model it as count data using, for example, the log link (as one usually would in Poisson regression). You could also build in some differential correlation between species, where each time point is viewed as a multivariate vector of species counts with some temporally decaying association between time points. This would require some pre-processing to trick the standard GEE packages into doing this.
If the temporal autocorrelation is not a nuisance parameter, I would try something more like a structured covariance model where you view the entire dataset as one observation of a big multivariate vector such that covariance between observations $Y_{s},Y_{t}$ on species $u,v$ is
$$ {\rm cov}(Y_{s}, Y_{t}) = f_{\theta}(s,t,u,v) $$
where $f$ is some parametric function known up to a finite number of parameters, $\theta$, along with a number of parameters to govern the mean structure. You might need to "build your own" for a model like this, but I'd also not be surprised if there are MPLUS packages to do things like this for count data.
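A minimal sketch of the epoch-based GEE idea, using the geepack package (an assumption — the answer names no package). The data frame `dat` and its `count`, `x`, `time`, and `epoch` columns are hypothetical, and `corstr = "ar1"` is a simplification of the distance-based working correlation the answer describes:

```r
library(geepack)  # assumed available; provides geeglm()

# dat: one row per observation; epoch = cluster of temporally close points,
# with epochs treated as independent replicates of each other
fit <- geeglm(count ~ x,
              id     = epoch,    # "cluster" defined by epoch membership
              family = poisson,  # count data via the log link
              corstr = "ar1",    # within-epoch correlation decays with lag
              data   = dat[order(dat$epoch, dat$time), ])

summary(fit)  # robust SEs remain valid even if corstr is misspecified
```

The robust ("sandwich") standard errors reported by `summary()` are what make the coefficient estimates trustworthy under a misspecified correlation structure, as the answer notes.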
25,829 | Is it allowed to include time as a predictor in mixed models? | Time is allowed; whether it is needed will depend on what you are trying to model? The problem you have is that you have covariates that together appear to fit the trend in the data, which Time can do just as well but using fewer degrees of freedom - hence they get dropped out instead of Time.
If the interest is to model the system, the relationship between the response and the covariates over time, rather than model how the response varies over time, then do not include Time as a covariate. If the aim is to model the change in the mean level of the response, include Time but do not include the covariate. From what you say, it would appear that you want the former, not the latter, and should not include Time in your model. (But do consider the extra info below.)
There are a couple of caveats though. For theory to hold, the residuals should be i.i.d. (or i.d. if you relax the independence assumption using a correlation structure). If you are modelling the response as a function of covariates and they do not adequately model any trend in the data, then the residuals will have a trend, which violates the assumptions of theory, unless the correlation structure fitted can cope with this trend.
Conversely, if you are modelling the trend in the response alone (just including Time), there may be systematic variation in the residuals (about the fitted trend) that is not explained by the trend (Time), and this might also violate the assumptions for the residuals. In such cases you might need to include other covariates to render the residuals i.i.d.
Why is this an issue? Well when you are testing if the trend component, for example, is significant, or whether the effects of covariates are significant, the theory used will assume the residuals are i.i.d. If they aren't i.i.d. then the assumptions won't be met and the p-values will be biased.
The point of all this is that you need to model all the various components of the data such that the residuals are i.i.d. for the theory you use, to test if the fitted components are significant, to be valid.
As an example, consider seasonal data and we want to fit a model that describes the long-term variation in the data, the trend. If we only model the trend and not the seasonal cyclic variation, we are unable to test whether the fitted trend is significant because the residuals will not be i.i.d. For such data, we would need to fit a model with both a seasonal component and a trend component, and a null model that contained just the seasonal component. We would then compare the two models using a generalized likelihood ratio test to assess the significance of the fitted trend. This is done using anova() on the $lme components of the two models fitted using gamm().
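That last step can be sketched as follows, for a hypothetical seasonal series `dat` with response `y`, month-of-year `month`, and a numeric `time` index (the column names and smooth choices are assumptions, not from the original answer):

```r
library(mgcv)  # provides gamm()

m0 <- gamm(y ~ s(month, bs = "cc"), data = dat)            # seasonal only (null)
m1 <- gamm(y ~ s(month, bs = "cc") + s(time), data = dat)  # seasonal + trend

# generalized likelihood ratio test for the trend component,
# via the $lme parts of the two fitted models
anova(m0$lme, m1$lme)
```

The cyclic cubic spline (`bs = "cc"`) for the month term keeps the seasonal component continuous across the December/January boundary.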
25,830 | Organizing a classification tree (in rpart) into a set of rules? | Such a functionality (or a close one) seems to be available in the rattle package, as described in RJournal 1/2 2009 (p. 50), although I only checked it from the command-line.
For your example, it yields the following output:
Rule number: 3 [Kyphosis=present cover=19 (23%) prob=0.58]
  Start< 8.5

Rule number: 23 [Kyphosis=present cover=7 (9%) prob=0.57]
  Start>=8.5
  Start< 14.5
  Age>=55
  Age< 111

Rule number: 22 [Kyphosis=absent cover=14 (17%) prob=0.14]
  Start>=8.5
  Start< 14.5
  Age>=55
  Age>=111

Rule number: 10 [Kyphosis=absent cover=12 (15%) prob=0.00]
  Start>=8.5
  Start< 14.5
  Age< 55

Rule number: 4 [Kyphosis=absent cover=29 (36%) prob=0.00]
  Start>=8.5
  Start>=14.5
To get this output, I source the rattle/R/rpart.R source file (from the source package) in my workspace, after having removed the two calls to Rtxt() in the asRules.rpart() function (you can also replace it with print). Then, I just type
> asRules(fit)
25,831 | Organizing a classification tree (in rpart) into a set of rules? | The
rpart.plot
package version 3.0 (July 2018) has a function
rpart.rules for generating a set of rules for a tree. For example
library(rpart.plot)
fit <- rpart(Kyphosis ~ Age + Number + Start, data=kyphosis)
rpart.rules(fit)
gives
Kyphosis
0.00 when Start >= 15
0.00 when Start is 9 to 15 & Age < 55
0.14 when Start is 9 to 15 & Age >= 111
0.57 when Start is 9 to 15 & Age is 55 to 111
0.58 when Start < 9
For more examples see Chapter 4 of the rpart.plot vignette.
25,832 | How can one plot continuous by continuous interactions in ggplot2? [closed] | Here's my version with your simulated data set:
x1 <- rnorm(100,2,10)
x2 <- rnorm(100,2,10)
y <- x1+x2+x1*x2+rnorm(100,1,2)
dat <- data.frame(y=y,x1=x1,x2=x2)
res <- lm(y~x1*x2,data=dat)
z1 <- z2 <- seq(-1,1)
newdf <- expand.grid(x1=z1,x2=z2)
library(ggplot2)
p <- ggplot(data=transform(newdf, yp=predict(res, newdf)),
aes(y=yp, x=x1, color=factor(x2))) + stat_smooth(method=lm)
p + scale_colour_discrete(name="x2") +
labs(x="x1", y="mean of resp") +
scale_x_continuous(breaks=seq(-1,1)) + theme_bw()
I let you manage the details about x/y-axis labels and legend positioning.
25,833 | How can one plot continuous by continuous interactions in ggplot2? [closed] | Computing the estimates for y with Z-score of 0 (y0 column), -1 (y1m column) and 1 (y1p column):
dat$y0 <- res$coefficients[[1]] + res$coefficients[[2]]*dat$x1 + res$coefficients[[3]]*0 + res$coefficients[[4]]*dat$x1*0
dat$y1m <- res$coefficients[[1]] + res$coefficients[[2]]*dat$x1 + res$coefficients[[3]]*-1 + res$coefficients[[4]]*dat$x1*-1
dat$y1p <- res$coefficients[[1]] + res$coefficients[[2]]*dat$x1 + res$coefficients[[3]]*1 + res$coefficients[[4]]*dat$x1*1
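Equivalently (a sketch assuming the same fitted model `res` from the earlier answer is in scope), the three columns can be computed with predict() instead of writing out the coefficients by hand:

```r
# predictions at x2 fixed at 0, -1 and 1, with x1 taken from dat;
# transform() overwrites the x2 column before prediction
dat$y0  <- predict(res, newdata = transform(dat, x2 = 0))
dat$y1m <- predict(res, newdata = transform(dat, x2 = -1))
dat$y1p <- predict(res, newdata = transform(dat, x2 = 1))
```

This stays correct if the model formula changes, since predict() uses whatever terms `res` contains.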
Plotting the lines with base plot():
plot(dat$x1, dat$y0, type="l", xlab="x1", ylab="Estimates")
lines(dat$x1, dat$y1m, col="red")
lines(dat$x1, dat$y1p, col="blue")
To use ggplot, you may call geom_line:
ggplot(dat, aes(x1, y0)) + geom_line() +
geom_line(aes(x1, y1m), color="red") +
geom_line(aes(x1, y1p), color="blue") +
  theme_bw() + xlab("x1") + ylab("Estimates")
25,834 | What kinds of things can I predict with a naive Bayesian classifier? | The Elements of Statistical Learning, by Hastie et al. has a lot of illustrations of Machine Learning applications, and all data sets are available on the companion website, including data on spam as on the Ruby Classifier webpage.
As for a gentle introduction to the Bayes classifier, I would suggest looking at the following tutorial from Andrew Moore: A Short Intro to Naive Bayesian Classifiers (many other tutorials are also available).
25,835 | What kinds of things can I predict with a naive Bayesian classifier? | You can try playing with spam filtering; that's quite a common use of naive Bayesian classifiers.
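As an illustrative sketch only (the e1071 package and the tiny word-indicator corpus are assumptions for illustration, not part of the answer):

```r
library(e1071)  # provides naiveBayes()

# invented toy corpus: word-presence indicators plus a spam/ham label
mail <- data.frame(
  offer   = factor(c(1, 1, 1, 0, 0, 0), levels = c(0, 1)),
  meeting = factor(c(0, 0, 1, 1, 1, 0), levels = c(0, 1)),
  label   = factor(c("spam", "spam", "spam", "ham", "ham", "ham"))
)

fit <- naiveBayes(label ~ offer + meeting, data = mail)

newmsg <- data.frame(offer   = factor(1, levels = c(0, 1)),
                     meeting = factor(0, levels = c(0, 1)))
predict(fit, newmsg)  # class prediction for the new message
```

Real spam filters use thousands of such word indicators, but the conditional-independence structure is the same.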
25,836 | Expectation of product of Gaussian random variables | Yes, there is a well-known result. Based on your edit, we can focus first on individual entries of the array $E[x_1 x_2^T]$. Such an entry is the product of two variables of zero mean and finite variances, say $\sigma_1^2$ and $\sigma_2^2$. The Cauchy-Schwarz Inequality implies the absolute value of the expectation of the product cannot exceed $|\sigma_1 \sigma_2|$. In fact, every value in the interval $[-|\sigma_1 \sigma_2|, |\sigma_1 \sigma_2|]$ is possible because it arises for some binormal distribution. Therefore, the $i,j$ entry of $E[x_1 x_2^T]$ must be less than or equal to $\sqrt{\Sigma_{1_{i,i}} \Sigma_{2_{j,j}}}$ in absolute value.
If we now assume all variables are normal and that $(x_1; x_2)$ is multinormal, there will be further restrictions because the covariance matrix of $(x_1; x_2)$ must be positive semidefinite. Rather than belabor the point, I will illustrate. Suppose $x_1$ has two components $x$ and $y$ and that $x_2$ has one component $z$. Let $x$ and $y$ have unit variance and correlation $\rho$ (thus specifying $\Sigma_1$) and suppose $z$ has unit variance ($\Sigma_2$). Let the expectation of $x z$ be $\alpha$ and that of $y z$ be $\beta$. We have established that $|\alpha| \le 1$ and $|\beta| \le 1$. However, not all combinations are possible: at a minimum, the determinant of the covariance matrix of $(x_1; x_2)$ cannot be negative. This imposes the non-trivial condition
$$1-\alpha ^2-\beta ^2+2 \alpha \beta \rho -\rho ^2 \ge 0.$$
For any $-1 \lt \rho \lt 1$ this is an ellipse (along with its interior) inscribed within the $\alpha, \beta$ square $[-1, 1] \times [-1, 1]$.
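A quick numerical check of this condition (base R; the test values are arbitrary): the left-hand side is exactly the determinant of the $3\times 3$ covariance matrix of $(x, y, z)$ with unit variances.

```r
rho <- 0.3; a <- 0.5; b <- -0.2   # arbitrary values for rho, alpha, beta

# covariance matrix of (x, y, z), all with unit variance
S <- matrix(c(1,   rho, a,
              rho, 1,   b,
              a,   b,   1), nrow = 3, byrow = TRUE)

det(S)                              # determinant directly: 0.56 here
1 - a^2 - b^2 + 2*a*b*rho - rho^2   # closed-form expression: also 0.56
```

Expanding the determinant along the first row reproduces the expression term by term, so the agreement is exact, not just numerical.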
To obtain further restrictions, additional assumptions about the variables are necessary.
Plot of the permissible region $(\rho, \alpha, \beta)$
25,837 | Expectation of product of Gaussian random variables | There are no strong results and it does not depend on Gaussianity. In the case where $x_1$ and $x_2$ are scalars, you are asking if knowing the variance of the variables implies something about their covariance. whuber’s answer is right. The Cauchy-Schwarz Inequality and positive semidefiniteness constrain the possible values.
The simplest example is that the squared covariance of a pair of variables can never exceed the product of their variances. For covariance matrices there is a generalization.
Consider the block partitioned covariance matrix of $[x_1 \ x_2]$,
$$
\left[
\begin{array}{cc}
\Sigma_{11} & \Sigma_{12} \\
\Sigma_{21} & \Sigma_{22}
\end{array}
\right].
$$
Then
$$\Vert \Sigma_{12} \Vert_q^2 \leq \Vert \Sigma_{11} \Vert_q \Vert \Sigma_{22} \Vert_q$$
for all Schatten q-norms. Positive (semi)definiteness of the covariance matrix also provides the constraint that
$$
\Sigma_{11} - \Sigma_{12} \Sigma_{22}^{-1} \Sigma_{21}
$$
must be positive (semi)definite. $\Sigma_{22}^{-1}$ is the (Moore-Penrose) inverse of $\Sigma_{22}$.
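A numerical illustration in base R (the random matrix is just an example): for a positive definite joint covariance, the Schur complement above is itself positive definite.

```r
set.seed(1)
A <- matrix(rnorm(25), 5, 5)
S <- crossprod(A)                  # a 5x5 positive definite "covariance"

S11 <- S[1:2, 1:2]; S12 <- S[1:2, 3:5]
S21 <- S[3:5, 1:2]; S22 <- S[3:5, 3:5]

schur <- S11 - S12 %*% solve(S22) %*% S21
eigen(schur)$values                # non-negative; strictly positive here
```

With a merely positive semidefinite $\Sigma_{22}$, `solve()` would be replaced by a pseudoinverse (e.g. `MASS::ginv`), matching the Moore-Penrose remark above.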
25,838 | Expectation of product of Gaussian random variables | suppose $(X,Y)$ is bivariate normal with zero means and correlation $\rho$. then
${\mathrm E} XY= cov(X,Y)= \rho\sigma_X\sigma_Y$.
all of the entries in the matrix $x_1x_2^T$ are of the form $XY$.
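A quick Monte Carlo check of this identity (a Python/NumPy sketch; the values of $\rho$, $\sigma_X$, $\sigma_Y$ are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
rho, sx, sy = 0.6, 2.0, 3.0   # illustrative correlation and standard deviations
cov = [[sx**2, rho * sx * sy], [rho * sx * sy, sy**2]]
draws = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)
emp = float(np.mean(draws[:, 0] * draws[:, 1]))   # Monte Carlo estimate of E[XY]
# emp should be close to rho * sx * sy
```

With 200,000 draws the empirical mean lands very close to $\rho\sigma_X\sigma_Y = 3.6$.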
25,839 | Omega squared for measure of effect in R? | A function to compute omega squared is straightforward to write. This function takes the object returned by the aov test, and calculates and returns an omega squared:
omega_sq <- function(aovm){
sum_stats <- summary(aovm)[[1]]
SSm <- sum_stats[["Sum Sq"]][1]
SSr <- sum_stats[["Sum Sq"]][2]
DFm <- sum_stats[["Df"]][1]
MSr <- sum_stats[["Mean Sq"]][2]
W2 <- (SSm-DFm*MSr)/(SSm+SSr+MSr)
return(W2)
}
edit: updated function for n-way aov models:
omega_sq <- function(aov_in, neg2zero=T){
aovtab <- summary(aov_in)[[1]]
n_terms <- length(aovtab[["Sum Sq"]]) - 1
output <- rep(-1, n_terms)
SSr <- aovtab[["Sum Sq"]][n_terms + 1]
MSr <- aovtab[["Mean Sq"]][n_terms + 1]
SSt <- sum(aovtab[["Sum Sq"]])
for(i in 1:n_terms){
SSm <- aovtab[["Sum Sq"]][i]
DFm <- aovtab[["Df"]][i]
output[i] <- (SSm-DFm*MSr)/(SSt+MSr)
if(neg2zero & output[i] < 0){output[i] <- 0}
}
names(output) <- rownames(aovtab)[1:n_terms]
return(output)
}
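For readers outside R, the same one-way formula can be sketched in Python (the sums of squares and degrees of freedom below are made-up illustration values, not from any real ANOVA):

```python
# omega^2 = (SSm - DFm * MSr) / (SSm + SSr + MSr), mirroring the R function above
def omega_sq(ss_model, ss_resid, df_model, df_resid):
    ms_resid = ss_resid / df_resid
    return (ss_model - df_model * ms_resid) / (ss_model + ss_resid + ms_resid)

# Hypothetical one-way table: SS_model = 120 on 2 df, SS_resid = 300 on 27 df
w2 = omega_sq(ss_model=120.0, ss_resid=300.0, df_model=2, df_resid=27)
```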
25,840 | Omega squared for measure of effect in R? | I had to recently report an $\omega^2$.
partialOmegas <- function(mod){
aovMod <- mod
if(!any(class(aovMod) %in% 'aov')) aovMod <- aov(mod)
sumAov <- summary(aovMod)[[1]]
residRow <- nrow(sumAov)
dfError <- sumAov[residRow,1]
msError <- sumAov[residRow,3]
nTotal <- nrow(model.frame(aovMod))
dfEffects <- sumAov[1:{residRow-1},1]
ssEffects <- sumAov[1:{residRow-1},2]
msEffects <- sumAov[1:{residRow-1},3]
partOmegas <- abs((dfEffects*(msEffects-msError)) /
(ssEffects + (nTotal -dfEffects)*msError))
names(partOmegas) <- rownames(sumAov)[1:{residRow-1}]
partOmegas
}
It is a messy function that can easily be cleaned up. It computes the partial $\omega^2$, and should probably only be used on between-subjects factorial designs.
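The partial $\omega^2$ formula used inside the function above can be sanity-checked with a short Python sketch (the ANOVA-table values are made up for illustration):

```python
# Partial omega^2 for one effect, mirroring the R code above:
# |df_eff * (MS_eff - MS_err)| / (SS_eff + (N - df_eff) * MS_err)
def partial_omega_sq(df_eff, ms_eff, ss_eff, ms_err, n_total):
    return abs(df_eff * (ms_eff - ms_err) / (ss_eff + (n_total - df_eff) * ms_err))

# Hypothetical effect: SS = 120 on 2 df (MS = 60), error MS = 10, N = 30
pw2 = partial_omega_sq(df_eff=2, ms_eff=60.0, ss_eff=120.0, ms_err=10.0, n_total=30)
```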
25,841 | Omega squared for measure of effect in R? | I found an omega squared function in somebody's .Rprofile that they made available online:
http://www.estudiosfonicos.cchs.csic.es/metodolo/1/.Rprofile
25,842 | Omega squared for measure of effect in R? | I'd suggest that generalized eta square is considered (ref, ref) a more appropriate measure of effect size. It is included in the ANOVA output in the ez package for R.
25,843 | Omega squared for measure of effect in R? | Daniel "strengejacke" Lüdecke's package sjstats can now do omega-squared, partial omega-squared, etc. for ANOVA models. Check it out.
Here is a vignette that demonstrates that:
https://cran.r-project.org/web/packages/sjstats/vignettes/anova-statistics.html
install.packages("sjstats")
library(sjstats)
mod1 <- aov(y~x, data= d.frame)
anova_stats(mod1)
25,844 | Are these formulas for transforming P, LSD, MSD, HSD, CI, to SE as an exact or inflated/conservative estimate of $\hat{\sigma}$ correct? | Your LSD equation looks fine. If you want to get back to variance and you have a summary statistic that says something about variability or significance of an effect, then you can almost always get back to variance; you just need to know the formula. For example, in your equation for LSD you want to solve for MSE: MSE = b * (LSD/t)^2 / 2
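Assuming the usual form $LSD = t\sqrt{2\,MSE/b}$ with $b$ replications per mean, the inversion can be verified with a round trip (a Python sketch with made-up numbers):

```python
import math

# Build an LSD from a known MSE, then invert it to recover the MSE.
mse_true, b, t_crit = 4.0, 6, 2.1            # illustrative values
lsd = t_crit * math.sqrt(2.0 * mse_true / b)  # LSD = t * sqrt(2 * MSE / b)
mse_back = b * (lsd / t_crit) ** 2 / 2.0      # MSE = b * (LSD/t)^2 / 2
```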
25,845 | Are these formulas for transforming P, LSD, MSD, HSD, CI, to SE as an exact or inflated/conservative estimate of $\hat{\sigma}$ correct? | I can only agree with John. Furthermore, perhaps this paper by David Saville helps you with some formulas to recalculate variability measures from LSDs et al.:
Saville D.J. (2003). Basic statistics and the inconsistency of multiple comparison procedures. Canadian Journal of Experimental Psychology, 57, 167–175
UPDATE:
If you are looking for more formulas to convert between various effect sizes, books on meta-analysis should provide a lot of these. However, I am not an expert in this area and can't recommend one.
But, I remember that the book by Rosenthal and Rosnow once helped with some formulas:
Essentials of Behavioral Research: Methods and Data Analysis
Furthermore, I have heard a lot of good things about the formulas in this book by Rosenthal, Rosnow & Rubin (although I have never used it):
Contrasts and Effect Sizes in Behavioral Research: A Correlational Approach (You should definitely give it a try if a nearby library has it).
If this is not enough, perhaps ask another question on literature for converting effect sizes for meta-analyses. Perhaps someone more into meta-analysis has more grounded recommendations.
25,846 | Are these formulas for transforming P, LSD, MSD, HSD, CI, to SE as an exact or inflated/conservative estimate of $\hat{\sigma}$ correct? | You may consider trying the R package compute.es. There are several functions for deriving effect size estimates and the variance of the effect size.
25,847 | What is the name of the percentage that defines a prediction interval? | This is an issue that has been bugging me for the many years I have been active in forecasting (which you seem to be interested in, given your mention of prediction intervals, PIs): there does not seem to be a standard term.
For instance, the M5 uncertainty competition (see also the guidelines here) - one of the largest forecasting competitions, run by experts - requested nine different quantile forecasts, noting that they would give rise to four different central PIs. However, there is no term used for the proportion of data to be contained in such a PI, like 95%.
I personally sometimes simply use "level". I agree with whuber that "confidence level" would be bad, if only because it feeds into the common confusion between confidence intervals and PIs. Contra whuber, I would avoid "coverage", or at least qualify it as in "target coverage", because "coverage" alone is too close to the realized coverage rather than the target coverage or level for my taste. (Until the M5 competition, it was accepted wisdom among forecasters that PIs are usually too narrow, so the realized coverage is usually smaller than the target coverage or level.) Also, in my experience among forecasters, I can't recall having seen "coverage" in the sense of the target level - but other statistical subcultures may have other conventions.
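The level vs. realized-coverage distinction can be illustrated with a small simulation (my own Python sketch, not from the answer): build a plug-in normal prediction interval with target level 95% from each training sample and count how often a future observation falls inside it. Ignoring parameter uncertainty makes the interval slightly too narrow, so realized coverage tends to sit just below the target level.

```python
import numpy as np

rng = np.random.default_rng(1)
level = 0.95                      # the target level / nominal coverage
z = 1.959963984540054             # 0.975 standard normal quantile
reps, n = 20_000, 200
train = rng.normal(10.0, 2.0, size=(reps, n))   # illustrative N(10, 2^2) data
future = rng.normal(10.0, 2.0, size=reps)       # one future value per replication
m = train.mean(axis=1)
s = train.std(axis=1, ddof=1)
# Plug-in normal PI: mean +/- z * sd, ignoring estimation uncertainty
covered = (future >= m - z * s) & (future <= m + z * s)
realized = float(covered.mean())   # realized coverage, close to but not exactly 0.95
```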
25,848 | Is there a standard measure of the sufficiency of a statistic? | Fisher's information associated with a statistic $T$ is the Fisher information associated with the distribution of that statistic
$$I_T(\theta) = \mathbb E_\theta\Big[\frac{\partial}{\partial \theta}\log f^T_\theta(T(X))^\prime \frac{\partial}{\partial \theta}\log f^T_\theta(T(X))\Big]$$
It is thus possible to compare Fisher's informations between statistics. For instance, Fisher's information associated with a sufficient statistic is the same as that of the entire sample X. On the other end of the spectrum, Fisher's information provided by an ancillary statistic is null.
Finding Fisher's information provided by the sample median is somewhat of a challenge. However, running a Monte Carlo experiment with $n$ large shows that the variance of the median is approximately 1.5-1.7 times larger than the variance of the empirical mean, which implies that the Fisher information is approximately 1.5-1.7 times smaller for the median. The exact expression of the (constant) Fisher information about $\theta$ attached to the median statistic $X_{(n/2)}$ of a $\mathcal N(\theta,1)$ sample is
$$1 − \mathbb E_0\left[\frac{∂^2}{∂θ^2}
\left\{ (n/2 − 1) \log \Phi (X_{(n/2)}) + (n − n/2) \log\Phi (-X_{(n/2)} )\right\}\right]
$$
where expectation is under $\theta=0$. It also writes as
$$1+n\mathbb E[Z_{n/2:n}\varphi(Z_{n/2:n})]-n\mathbb E[Z_{n/2:n-1}\varphi(Z_{n/2:n-1})]+\\
\frac{n(n-1)}{n/2-2}\varphi(Z_{n/2-2:n-2})^2+
\frac{n(n-1)}{n-n/2-1}\varphi(Z_{n/2:n-2})^2\tag{1}$$
(after correction of a typo in the thesis).
As stated in this same thesis
The median order statistics contain the most information about θ.
(...) For n = 10, the $X_{5:10}$ and $X_{6:10}$ each contain 0.6622 times the
total information in the sample. For n = 20, the proportion of
information contained in the median statistic is 0.6498.
Since $1/0.6498=
1.5389$, it is already close to $1/4\varphi(0)^2=1.5707$, while a Monte Carlo approximation of (1) returns $1.5706$ for $n=10^4$.
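The variance ratio quoted above can be reproduced by simulation (a Python sketch; the asymptotic ratio for normal data is $\pi/2 \approx 1.5708$):

```python
import numpy as np

rng = np.random.default_rng(7)
n, reps = 1001, 5000
x = rng.normal(size=(reps, n))   # reps samples of size n from N(0, 1)
# Ratio of the sampling variances of the median and the mean;
# should be close to pi/2 for large n.
ratio = float(np.var(np.median(x, axis=1)) / np.var(np.mean(x, axis=1)))
```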
25,849 | What does it mean that a Gaussian process is 'infinite dimensional?' | Suppose we have $X \sim \mathcal N_n(\mu, \Sigma)$. We can think of $X$ as giving us a random function from $\{1, \dots, n\}$ to $\mathbb R$, which we evaluate by indexing so e.g. $X(1) = X_1$. The space of random functions with this domain is $n$-dimensional since it is spanned by the functions $\{e_1, \dots, e_n\}$ where $e_i(t) = \mathbf 1_{t=i}$ are just the standard basis vectors thought of as functions.
The stochastic process view of this is that we have an index set $T = \{1,\dots, n\}$ and then we have random variables $X_t : \Omega\to\mathbb R$ for each $t \in T$. A single realization of this process yields a sequence $(x_1, \dots, x_n)$ which can be thought of as a particular random function. More formally, if $(\Omega, \mathscr F, P)$ is our probability space, then a single realization of the stochastic process is the function from $T$ to $\mathbb R$ given by $t \mapsto X_t(\omega)$ where $\omega\in\Omega$ is the sample outcome.
If we want random functions with an infinite support (so what we more typically think of as functions, like $f : \mathbb R\to\mathbb R$ with
$f(x)=x^2$) we can get those by using stochastic processes with larger index sets like $T=\mathbb N$ or $T = [0,\infty)$. A single realization of one of these processes gives us a function from $T$ to $\mathbb R$, but now these functions live in an infinite dimension space (typically). In other words, the space of functions that can be realized by this process is an infinite dimension function space, as opposed to $\mathcal N_n(\mu,\Sigma)$ where the space of realizable functions is finite dimensional.
If we further make the requirement that the outputs of our random functions have a multivariate Gaussian distribution for every finite collection of index points, then it turns out that this usefully generalizes the idea of a Gaussian distribution over a finite dimension space to an infinite dimension one.
In summary: a multivariate Gaussian gives us random functions in finite dimension spaces, a Gaussian Process can give us random functions from infinite dimension spaces.
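The finite-collection requirement makes GPs easy to sample in practice: evaluating the process at any finite set of index points is just a draw from a multivariate Gaussian whose covariance is built from a kernel. A NumPy sketch (the squared-exponential kernel is an illustrative choice, not something specified in the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 5.0, 50)   # a finite collection of index points from T = [0, 5]
# Squared-exponential (RBF) kernel as the covariance function, plus jitter
K = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2) + 1e-10 * np.eye(50)
# One realization of the process restricted to these index points: t -> f(t)
f = rng.multivariate_normal(np.zeros(50), K)
```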
25,850 | What does it mean that a Gaussian process is 'infinite dimensional?' | While a sample from a multivariate Gaussian distribution produces a vector with a discrete number of elements, a sample from a Gaussian Process is a continuous function, which is "infinite-dimensional" in the sense that it is "indexed" by a continuously varying coordinate.
I'm not an expert in GPs, but I've found this page helpful.
25,851 | What does it mean that a Gaussian process is 'infinite dimensional?' | Infinite dimensional Gaussian processes have sample functions which span an infinite dimensional space (a subspace of the Hilbert space of mean square integrable functions). Equivalently, the kernel expansion requires an infinite number of terms (Mercer's theorem). It is possible to have Gaussian random processes with countable or even uncountable index sets whose sample functions span a finite dimensional space, equivalently a kernel expansion with a finite number of terms, and hence are not infinite dimensional. So the previous answers are not correct.
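A concrete finite-dimensional example (my own Python sketch): the rank-one kernel $k(s,u) = su$ has the single-term expansion $X_t = Z\,t$ with $Z \sim N(0,1)$, so every sample path lies in the one-dimensional space spanned by $t \mapsto t$, even though the index set is uncountable.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.1, 1.0, 10)    # index points at which we observe the process
z = rng.normal(size=1000)        # one N(0, 1) draw per sample path
paths = np.outer(z, t)           # each path is X_t = Z * t
rank = np.linalg.matrix_rank(paths)    # the paths span a 1-dimensional space
emp_cov = paths.T @ paths / len(z)     # approximates the rank-one kernel s * u
```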
25,852 | SVD : Why right singular matrix is written as transpose | $V^T$ is the Hermitian transpose (the complex conjugate transpose) of $V$.
$V$ itself holds the right-singular vectors of $A$ that are the (orthonormal) eigenvectors of $A^TA$; to that extent: $A^TA = VS^2V^T$. If we wrote $W = V^T$, then $W$ would no longer represent the eigenvectors of $A^TA$.
Additionally, defining the SVD as: $A = USV^T$ allows us to directly use $U$ and $V$ to diagonalise the matrix in the sense of $Av_i = s_iu_i$, for $i\leq r$ where $r$ is the rank of $A$ (i.e. $AV = US$). Finally, using $USV^T$ also simplifies our calculation in the case of a symmetric matrix $A$, in which case $U$ and $V$ will coincide (up to a sign) and it will allow us to directly link the singular decomposition to the eigen-decomposition $A = Q \Lambda Q^T$. Just to be clear: "yes, using $V^T$ instead of $W = V^T$ is a bit of convention" but it is a helpful one.
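Both facts are easy to check numerically (a NumPy sketch, not from the original answer): the rows of $V^T$ are eigenvectors of $A^TA$, and $AV = US$.

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(4, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)
# Columns of V (rows of Vt) are eigenvectors of A^T A: A^T A = V S^2 V^T
ok_eig = bool(np.allclose(A.T @ A, Vt.T @ np.diag(s**2) @ Vt))
# Diagonalisation in the sense A v_i = s_i u_i, i.e. A V = U S
ok_diag = bool(np.allclose(A @ Vt.T, U * s))
```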
$V$ itself holds the right-singular vectors of $A$ that are the (orthonormal) eigenvectors of $A^TA$; to that extent: $A^TA = | SVD : Why right singular matrix is written as transpose
$V^T$ is the Hermitian transpose (the complex conjugate transpose) of $V$.
$V$ itself holds the right-singular vectors of $A$ that are the (orthonormal) eigenvectors of $A^TA$; to that extent: $A^TA = VS^2V^T$. If we wrote $W = V^T$, then $W$ would no longer represent the eigenvectors of $A^TA$.
Additionally, defining the SVD as $A = USV^T$ allows us to directly use $U$ and $V$ to diagonalise the matrix in the sense of $Av_i = s_iu_i$, for $i\leq r$ where $r$ is the rank of $A$ (i.e. $AV = US$). Finally, using $USV^T$ also simplifies our calculation in the case of a symmetric matrix $A$, in which case $U$ and $V$ will coincide (up to a sign) and it will allow us to directly link the singular value decomposition to the eigendecomposition $A = Q \Lambda Q^T$. Just to be clear: yes, using $V^T$ instead of defining $W = V^T$ is a bit of a convention, but it is a helpful one. | SVD : Why right singular matrix is written as transpose
$V^T$ is the Hermitian transpose (the complex conjugate transpose) of $V$.
$V$ itself holds the right-singular vectors of $A$ that are the (orthonormal) eigenvectors of $A^TA$; to that extent: $A^TA = |
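These identities are easy to check numerically (a numpy sketch with an arbitrary matrix; not part of the original answer): the columns of $V$ returned by the SVD are eigenvectors of $A^TA$, and $A^TA = VS^2V^T$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 3))

U, s, Vt = np.linalg.svd(A, full_matrices=False)  # A = U @ diag(s) @ Vt
V = Vt.T

AtA = A.T @ A
# A^T A = V S^2 V^T ...
print(np.allclose(AtA, V @ np.diag(s**2) @ Vt))       # True
# ... so each column of V is an eigenvector of A^T A
print(np.allclose(AtA @ V[:, 0], s[0]**2 * V[:, 0]))  # True
```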
25,853 | SVD : Why right singular matrix is written as transpose | It's written as a transpose for linear algebraic reasons.
Consider the trivial rank-one case $A = uv^T$, where $u$ and $v$ are, say, unit vectors. This expression tells you that, as a linear transformation, $A$ takes the vector $v$ to $u$, and the orthogonal complement of $v$ to zero. You can see how the transpose shows up naturally.
This is generalized by the SVD, which tells you that any linear transformation is a sum of such rank-one maps, and, what's more, you can arrange for the summands to be orthogonal.
Specifically, the decomposition
$$
A = U\Sigma V^T = \sum_{i = 1}^k \sigma_i u_i v_i^T
$$
says that, for any linear transformation $A$ on $\mathbb{R}^n$ for some $n$ (more generally, any compact operator on separable Hilbert space), you can find orthonormal sets $\{v_i\}$ and $\{u_i\}$ such that
$\{v_i\}$ spans $\ker(A)^{\perp}$.
$A$ takes $v_i$ to $\sigma_i u_i$, for each $i$.
A special case of this is the spectral decomposition for a positive semidefinite matrix $A$, where $U = V$ and the $u_i$'s are the eigenvectors of $A$---the summands $u_i u_i^T$ are rank-one orthogonal projections. For Hermitian $A$, $U$ is "almost equal" to $V$---if the corresponding eigenvalue is negative, one has to take $u_i = -v_i$ so that $\sigma_i \geq 0$. | SVD : Why right singular matrix is written as transpose | It's written as a transpose for linear algebraic reasons.
Consider the trivial rank-one case $A = uv^T$, where $u$ and $v$ are, say, unit vectors. This expression tells you that, as a linear transform | SVD : Why right singular matrix is written as transpose
It's written as a transpose for linear algebraic reasons.
Consider the trivial rank-one case $A = uv^T$, where $u$ and $v$ are, say, unit vectors. This expression tells you that, as a linear transformation, $A$ takes the vector $v$ to $u$, and the orthogonal complement of $v$ to zero. You can see how the transpose shows up naturally.
This is generalized by the SVD, which tells you that any linear transformation is a sum of such rank-one maps, and, what's more, you can arrange for the summands to be orthogonal.
Specifically, the decomposition
$$
A = U\Sigma V^T = \sum_{i = 1}^k \sigma_i u_i v_i^T
$$
says that, for any linear transformation $A$ on $\mathbb{R}^n$ for some $n$ (more generally, any compact operator on separable Hilbert space), you can find orthonormal sets $\{v_i\}$ and $\{u_i\}$ such that
$\{v_i\}$ spans $\ker(A)^{\perp}$.
$A$ takes $v_i$ to $\sigma_i u_i$, for each $i$.
A special case of this is the spectral decomposition for a positive semidefinite matrix $A$, where $U = V$ and the $u_i$'s are the eigenvectors of $A$---the summands $u_i u_i^T$ are rank-one orthogonal projections. For Hermitian $A$, $U$ is "almost equal" to $V$---if the corresponding eigenvalue is negative, one has to take $u_i = -v_i$ so that $\sigma_i \geq 0$. | SVD : Why right singular matrix is written as transpose
It's written as a transpose for linear algebraic reasons.
Consider the trivial rank-one case $A = uv^T$, where $u$ and $v$ are, say, unit vectors. This expression tells you that, as a linear transform |
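A numpy sketch of this rank-one expansion (an arbitrary matrix, for illustration only): $A$ is reproduced as $\sum_i \sigma_i u_i v_i^T$, and $A$ takes each $v_i$ to $\sigma_i u_i$.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# A as a sum of rank-one pieces sigma_i * u_i v_i^T
A_sum = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(len(s)))
print(np.allclose(A, A_sum))                   # True

# A takes v_i to sigma_i * u_i
print(np.allclose(A @ Vt[0], s[0] * U[:, 0]))  # True
```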
25,854 | SVD : Why right singular matrix is written as transpose | My answer is much dumber than the others...
let's say $W = V^T$
and then write the SVD as $A = U \Sigma W$
with that you are asking the reader to memorize one more variable ($W$), but for an expression as simple as $V^T$ it is just not worth it, IMO. | SVD : Why right singular matrix is written as transpose | My answer is much dumber than the others...
lets say, W = V_Transpose
and then write SVD as A = U Σ W
with that you are asking the reader to memorize one more variable ($W$) but for a simple express | SVD : Why right singular matrix is written as transpose
My answer is much dumber than the others...
let's say $W = V^T$
and then write the SVD as $A = U \Sigma W$
with that you are asking the reader to memorize one more variable ($W$), but for an expression as simple as $V^T$ it is just not worth it, IMO. | SVD : Why right singular matrix is written as transpose
My answer is much dumber than the others...
lets say, W = V_Transpose
and then write SVD as A = U Σ W
with that you are asking the reader to memorize one more variable ($W$) but for a simple express |
25,855 | Why is information about the validation data leaked if I evaluate model performance on validation data when tuning hyperparameters? | Information is leaked because you're using the validation data to make hyper-parameter choices. Essentially, you're creating a complicated optimization problem: minimize the loss over hyper-parameters $\phi$ as evaluated against the validation data, where these hyper-parameters regularize a neural network model that has parameters $\theta$ trained by use of a specific training set.
Even though the parameters $\theta$ are directly informed by the training data, the hyper-parameters $\phi$ are selected on the basis of the validation data. Moreover, because the hyper-parameters $\phi$ implicitly influence $\theta$, the information from the validation data is indirectly influencing the model that you choose. | Why is information about the validation data leaked if I evaluate model performance on validation da | Information is leaked because you're using the validation data to make hyper-parameter choices. Essentially, you're creating a complicated optimization problem: minimize the loss over hyper-parameters | Why is information about the validation data leaked if I evaluate model performance on validation data when tuning hyperparameters?
Information is leaked because you're using the validation data to make hyper-parameter choices. Essentially, you're creating a complicated optimization problem: minimize the loss over hyper-parameters $\phi$ as evaluated against the validation data, where these hyper-parameters regularize a neural network model that has parameters $\theta$ trained by use of a specific training set.
Even though the parameters $\theta$ are directly informed by the training data, the hyper-parameters $\phi$ are selected on the basis of the validation data. Moreover, because the hyper-parameters $\phi$ implicitly influence $\theta$, the information from the validation data is indirectly influencing the model that you choose. | Why is information about the validation data leaked if I evaluate model performance on validation da
Information is leaked because you're using the validation data to make hyper-parameter choices. Essentially, you're creating a complicated optimization problem: minimize the loss over hyper-parameters |
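A toy numpy sketch of the leakage (everything here is illustrative, not the asker's setup): with pure-noise labels, the hyper-parameter setting that happens to score best on the validation data looks clearly better than chance there, but a fresh test set exposes the optimism.

```python
import numpy as np

rng = np.random.default_rng(0)
y_val, y_test = rng.integers(0, 2, 200), rng.integers(0, 2, 200)

# Pretend each of 50 hyper-parameter settings yields a (here: purely random) classifier
val_preds  = rng.integers(0, 2, (50, 200))
test_preds = rng.integers(0, 2, (50, 200))

val_acc = (val_preds == y_val).mean(axis=1)
best = val_acc.argmax()                        # "tuning" on the validation set

print(val_acc[best])                           # optimistic: noticeably above 0.5
print((test_preds[best] == y_test).mean())     # honest: near chance level
```

The validation score of the selected setting is biased upward precisely because the selection used the validation data; that information has leaked into the model choice.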
25,856 | The linear transformation of the normal gaussian vectors | Since you have not linked to the paper, I don't know the context of this quote. However, it is a well-known property of the normal distribution that linear transformations of normal random vectors are normal random vectors. If $\boldsymbol{S} \sim \text{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ then it can be shown that $\boldsymbol{A} \boldsymbol{S} \sim \text{N}(\boldsymbol{A} \boldsymbol{\mu}, \boldsymbol{A} \boldsymbol{\Sigma} \boldsymbol{A}^\text{T})$. Formal proof of this result can be undertaken quite easily using characteristic functions. | The linear transformation of the normal gaussian vectors | Since you have not linked to the paper, I don't know the context of this quote. However, it is a well-known property of the normal distribution that linear transformations of normal random vectors ar | The linear transformation of the normal gaussian vectors
Since you have not linked to the paper, I don't know the context of this quote. However, it is a well-known property of the normal distribution that linear transformations of normal random vectors are normal random vectors. If $\boldsymbol{S} \sim \text{N}(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ then it can be shown that $\boldsymbol{A} \boldsymbol{S} \sim \text{N}(\boldsymbol{A} \boldsymbol{\mu}, \boldsymbol{A} \boldsymbol{\Sigma} \boldsymbol{A}^\text{T})$. Formal proof of this result can be undertaken quite easily using characteristic functions. | The linear transformation of the normal gaussian vectors
Since you have not linked to the paper, I don't know the context of this quote. However, it is a well-known property of the normal distribution that linear transformations of normal random vectors ar |
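A quick Monte Carlo check of this property (a numpy sketch; the particular $\boldsymbol{\mu}$, $\boldsymbol{\Sigma}$ and $\boldsymbol{A}$ are arbitrary): the sample mean and covariance of $\boldsymbol{A}\boldsymbol{S}$ should match $\boldsymbol{A}\boldsymbol{\mu}$ and $\boldsymbol{A}\boldsymbol{\Sigma}\boldsymbol{A}^\text{T}$.

```python
import numpy as np

rng = np.random.default_rng(1)
mu    = np.array([1.0, -2.0])
Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
A     = np.array([[1.0, 2.0], [0.0, 3.0]])

S  = rng.multivariate_normal(mu, Sigma, size=200_000)  # rows are draws of S
AS = S @ A.T                                           # apply A to each draw

print(AS.mean(axis=0))   # close to A @ mu
print(np.cov(AS.T))      # close to A @ Sigma @ A.T
```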
25,857 | The linear transformation of the normal gaussian vectors | For a little bit of visualisation, consider that the Gaussian distribution is scaled by r^2, so multiple independent axes form a Pythagorean relation when scaled by their standard deviations, from which it follows that the re-scaled distribution's fuzz ball becomes spherical (in n dimensions) and can be rotated about its centre at your convenience.
One of the radial measures is the Mahalanobis distance and is useful in many practical cases where the central limit is applied... | The linear transformation of the normal gaussian vectors | For a little bit of visualisation, consider that the Gaussian distribution is scaled by r^2, so multiple independent axes form a Pythagorean relation when scaled by their standard deviations, from whi | The linear transformation of the normal gaussian vectors
For a little bit of visualisation, consider that the Gaussian distribution is scaled by r^2, so multiple independent axes form a Pythagorean relation when scaled by their standard deviations, from which it follows that the re-scaled distribution's fuzz ball becomes spherical (in n dimensions) and can be rotated about its centre at your convenience.
One of the radial measures is the Mahalanobis distance and is useful in many practical cases where the central limit is applied... | The linear transformation of the normal gaussian vectors
For a little bit of visualisation, consider that the Gaussian distribution is scaled by r^2, so multiple independent axes form a Pythagorean relation when scaled by their standard deviations, from whi |
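A small numpy sketch of the Mahalanobis distance mentioned above (the numbers are arbitrary): whitening with a Cholesky factor makes the "fuzz ball" spherical, after which the ordinary Euclidean length of the whitened vector is exactly the Mahalanobis distance computed via the inverse covariance.

```python
import numpy as np

Sigma = np.array([[2.0, 0.6], [0.6, 1.0]])
mu = np.zeros(2)
pt = np.array([1.0, 1.0])

# Whitening: solve L w = (pt - mu) where Sigma = L L^T; in the whitened
# coordinates the distribution is spherical, so Euclidean length applies.
L = np.linalg.cholesky(Sigma)
w = np.linalg.solve(L, pt - mu)
maha = np.sqrt(w @ w)

# Same quantity via the inverse covariance directly
maha2 = np.sqrt((pt - mu) @ np.linalg.inv(Sigma) @ (pt - mu))
print(maha, maha2)   # the two agree
```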
25,858 | Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables | That the pdf is correct can be checked by a simple simulation
# draws of Z = sqrt(X^2 + Y^2) with X, Y ~ U(0,1)
samps=sqrt(runif(1e5)^2+runif(1e5)^2)
hist(samps,prob=TRUE,nclass=143,col="wheat")
# density: pi*z/2 for z <= 1 and z*(pi/2 - 2*acos(1/z)) for z > 1;
# the term (x+(1-x)*(x<1)) equals 1 when x < 1, so acos(1) = 0 there and no NaN arises
df=function(x){pi*x/2-2*x*(x>1)*acos(1/(x+(1-x)*(x<1)))}
curve(df,add=TRUE,col="sienna",lwd=3)
Finding the cdf without the polar change of variables goes through
\begin{align*}
\mathrm{Pr}(\sqrt{X^2+Y^2}\le z) &= \mathrm{Pr}(X^2+Y^2\le z^2)\\
&= \mathrm{Pr}(Y^2\le z^2-X^2)\\
&=\mathrm{Pr}(Y\le \sqrt{z^2-X^2}\,,X\le z)\\
&=\mathbb{E}^X[\min\{1,\sqrt{z^2-X^2}\}\,\mathbb{I}_{[0,\min(1,z)]}(X)]\\
&=\int_0^{\min(1,z)} \min\{1,\sqrt{z^2-x^2}\}\,\text{d}x
\end{align*}
where the $\min\{1,\cdot\}$ is needed because $Y\le 1$ almost surely: when $z>1$, the bound $\sqrt{z^2-x^2}$ exceeds $1$ for $x<\sqrt{z^2-1}$. For $0\le z\le 1$ this gives
$$\int_0^{z} \sqrt{z^2-x^2}\,\text{d}x = \frac{z^2}{2}\left[\sin^{-1}\frac{x}{z}+\frac{x}{z}\sqrt{1-\frac{x^2}{z^2}}\,\right]_0^{z} = \frac{\pi z^2}{4}$$
while for $1< z\le \sqrt 2$ the integral splits at $x=\sqrt{z^2-1}$:
\begin{align*}
\int_0^{\sqrt{z^2-1}} 1\,\text{d}x + \int_{\sqrt{z^2-1}}^{1} \sqrt{z^2-x^2}\,\text{d}x
&= \sqrt{z^2-1} + \frac{z^2}{2}\left[\sin^{-1}\frac{x}{z}+\frac{x}{z}\sqrt{1-\frac{x^2}{z^2}}\,\right]_{\sqrt{z^2-1}}^{1}\\
&= \sqrt{z^2-1} + \frac{z^2}{2}\left(\sin^{-1}\frac{1}{z} - \cos^{-1}\frac{1}{z}\right)\\
&= \sqrt{z^2-1} + z^2\left(\frac{\pi}{4}-\cos^{-1}\frac{1}{z}\right)
\end{align*}
using $\sin^{-1}\sqrt{1-z^{-2}} = \cos^{-1} z^{-1}$ and $\sin^{-1} z^{-1} = \pi/2-\cos^{-1} z^{-1}$. Hence
$$\mathrm{Pr}(\sqrt{X^2+Y^2}\le z) = \begin{cases}
\pi z^2/4 &\text{ if }0\le z\le 1\\
\sqrt{z^2-1}+z^2\left(\frac{\pi}{4}-\cos^{-1}\frac{1}{z}\right) &\text{ if }1< z\le \sqrt 2\\
\end{cases}$$
which ends up with the same complexity! (Plus potential mistakes of mine along the way!) | Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables | That the pdf is correct can be checked by a simple simulation
samps=sqrt(runif(1e5)^2+runif(1e5)^2)
hist(samps,prob=TRUE,nclass=143,col="wheat")
df=function(x){pi*x/2-2*x*(x>1)*acos(1/(x+(1-x)*(x<1))) | Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables
That the pdf is correct can be checked by a simple simulation
# draws of Z = sqrt(X^2 + Y^2) with X, Y ~ U(0,1)
samps=sqrt(runif(1e5)^2+runif(1e5)^2)
hist(samps,prob=TRUE,nclass=143,col="wheat")
# density: pi*z/2 for z <= 1 and z*(pi/2 - 2*acos(1/z)) for z > 1;
# the term (x+(1-x)*(x<1)) equals 1 when x < 1, so acos(1) = 0 there and no NaN arises
df=function(x){pi*x/2-2*x*(x>1)*acos(1/(x+(1-x)*(x<1)))}
curve(df,add=TRUE,col="sienna",lwd=3)
Finding the cdf without the polar change of variables goes through
\begin{align*}
\mathrm{Pr}(\sqrt{X^2+Y^2}\le z) &= \mathrm{Pr}(X^2+Y^2\le z^2)\\
&= \mathrm{Pr}(Y^2\le z^2-X^2)\\
&=\mathrm{Pr}(Y\le \sqrt{z^2-X^2}\,,X\le z)\\
&=\mathbb{E}^X[\min\{1,\sqrt{z^2-X^2}\}\,\mathbb{I}_{[0,\min(1,z)]}(X)]\\
&=\int_0^{\min(1,z)} \min\{1,\sqrt{z^2-x^2}\}\,\text{d}x
\end{align*}
where the $\min\{1,\cdot\}$ is needed because $Y\le 1$ almost surely: when $z>1$, the bound $\sqrt{z^2-x^2}$ exceeds $1$ for $x<\sqrt{z^2-1}$. For $0\le z\le 1$ this gives
$$\int_0^{z} \sqrt{z^2-x^2}\,\text{d}x = \frac{z^2}{2}\left[\sin^{-1}\frac{x}{z}+\frac{x}{z}\sqrt{1-\frac{x^2}{z^2}}\,\right]_0^{z} = \frac{\pi z^2}{4}$$
while for $1< z\le \sqrt 2$ the integral splits at $x=\sqrt{z^2-1}$:
\begin{align*}
\int_0^{\sqrt{z^2-1}} 1\,\text{d}x + \int_{\sqrt{z^2-1}}^{1} \sqrt{z^2-x^2}\,\text{d}x
&= \sqrt{z^2-1} + \frac{z^2}{2}\left[\sin^{-1}\frac{x}{z}+\frac{x}{z}\sqrt{1-\frac{x^2}{z^2}}\,\right]_{\sqrt{z^2-1}}^{1}\\
&= \sqrt{z^2-1} + \frac{z^2}{2}\left(\sin^{-1}\frac{1}{z} - \cos^{-1}\frac{1}{z}\right)\\
&= \sqrt{z^2-1} + z^2\left(\frac{\pi}{4}-\cos^{-1}\frac{1}{z}\right)
\end{align*}
using $\sin^{-1}\sqrt{1-z^{-2}} = \cos^{-1} z^{-1}$ and $\sin^{-1} z^{-1} = \pi/2-\cos^{-1} z^{-1}$. Hence
$$\mathrm{Pr}(\sqrt{X^2+Y^2}\le z) = \begin{cases}
\pi z^2/4 &\text{ if }0\le z\le 1\\
\sqrt{z^2-1}+z^2\left(\frac{\pi}{4}-\cos^{-1}\frac{1}{z}\right) &\text{ if }1< z\le \sqrt 2\\
\end{cases}$$
which ends up with the same complexity! (Plus potential mistakes of mine along the way!) | Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables
That the pdf is correct can be checked by a simple simulation
samps=sqrt(runif(1e5)^2+runif(1e5)^2)
hist(samps,prob=TRUE,nclass=143,col="wheat")
df=function(x){pi*x/2-2*x*(x>1)*acos(1/(x+(1-x)*(x<1))) |
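As a hedged numerical cross-check (a Python sketch; the variable names are mine), the closed-form CDF appearing in these answers can be compared against a Monte Carlo sample:

```python
import numpy as np

rng = np.random.default_rng(2)
z = np.hypot(rng.random(500_000), rng.random(500_000))  # sqrt(X^2 + Y^2)

def cdf(t):
    """Closed-form CDF of sqrt(X^2+Y^2) for X, Y i.i.d. U(0,1)."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.pi * t**2 / 4                               # quarter-disc branch, t <= 1
    hi = t > 1
    out[hi] = np.sqrt(t[hi]**2 - 1) + t[hi]**2 * (np.pi/4 - np.arccos(1/t[hi]))
    return out

ts = np.array([0.5, 1.0, 1.2, 1.4])
print(cdf(ts))                          # closed form
print([(z <= t).mean() for t in ts])    # empirical frequencies, should be close
```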
25,859 | Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables | $f_z(z)$ :
So, for $1\le z<\sqrt 2$, we have
$\cos^{-1}\left(\frac{1}{z}\right)\le\theta\le\sin^{-1}\left(\frac{1}{z}\right)$
You can simplify your expressions by using symmetry: evaluate them for $\theta_{min} < \theta < \frac{\pi}{4}$, that is, over half of the region, and then double the result.
Then you get:
$$P(Z \leq r) = 2 \int_0^r z \left(\int_{\theta_{min}}^{\frac{\pi}{4}}d\theta\right) dz = \int_0^r z \left(\frac{\pi}{2}-2\theta_{min}\right) dz$$
and your $f_z(z)$ is
$$f_z(z) = z \left(\frac{\pi}{2}-2\theta_{min}\right) = \begin{cases} z\left(\frac{\pi}{2}\right) & \text{ if } 0 \leq z \leq 1 \\ z \left(\frac{\pi}{2} - 2 \cos^{-1}\left(\frac{1}{z}\right)\right) & \text{ if } 1 < z \leq \sqrt{2} \end{cases}$$
$F_z(z)$ :
You can use the indefinite integral:
$$\int z \cos^{-1}\left(\frac{1}{z}\right) = \frac{1}{2} z \left( z \cos^{-1}\left(\frac{1}{z}\right) - \sqrt{1-\frac{1}{z^2}} \right) + C $$
note $\frac{d}{du} \cos^{-1}(u) = - (1-u^2)^{-0.5}$
This leads straightforwardly to something similar to Xi'an's expression for $Pr(Z \leq z)$, namely
if $1 \leq z \leq \sqrt{2}$ then:
$$F_z(z) = {z^2} \left(\frac{\pi}{4}-\cos^{-1}\left(\frac{1}{z}\right) + z^{-1}\sqrt{1-\frac{1}{z^2}} \right)$$
The relation with your expression is seen when we split up the $cos^{-1}$ into two $cos^{-1}$ expressions, and then convert them to different $sin^{-1}$ expressions.
for $z>1$ we have
$$\cos^{-1}\left(\frac{1}{z}\right) = \sin^{-1}\left(\sqrt{1-\frac{1}{z^2}}\right) = \sin^{-1}\left(\frac{\sqrt{z^2-1}}{z}\right) $$
and
$$\cos^{-1}\left(\frac{1}{z}\right) = \frac{\pi}{2} -\sin^{-1}\left(\frac{1}{z}\right) $$
so
$$\begin{array}\\
\cos^{-1}\left(\frac{1}{z}\right) & = 0.5 \cos^{-1}\left(\frac{1}{z}\right) + 0.5 \cos^{-1}\left(\frac{1}{z}\right) \\
& = \frac{\pi}{4} - 0.5 \sin^{-1}\left(\frac{1}{z}\right) + 0.5 \sin^{-1}\left(\frac{\sqrt{z^2-1}}{z}\right) \end{array} $$
which results in your expression when you plug this into the before mentioned $F_z(z)$ for $1<z<\sqrt{2}$ | Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables | $f_z(z)$ :
So, for $1\le z<\sqrt 2$, we have
$\cos^{-1}\left(\frac{1}{z}\right)\le\theta\le\sin^{-1}\left(\frac{1}{z}\right)$
You can simplify your expressions when you use symmetry and evaluate t | Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables
$f_z(z)$ :
So, for $1\le z<\sqrt 2$, we have
$\cos^{-1}\left(\frac{1}{z}\right)\le\theta\le\sin^{-1}\left(\frac{1}{z}\right)$
You can simplify your expressions by using symmetry: evaluate them for $\theta_{min} < \theta < \frac{\pi}{4}$, that is, over half of the region, and then double the result.
Then you get:
$$P(Z \leq r) = 2 \int_0^r z \left(\int_{\theta_{min}}^{\frac{\pi}{4}}d\theta\right) dz = \int_0^r z \left(\frac{\pi}{2}-2\theta_{min}\right) dz$$
and your $f_z(z)$ is
$$f_z(z) = z \left(\frac{\pi}{2}-2\theta_{min}\right) = \begin{cases} z\left(\frac{\pi}{2}\right) & \text{ if } 0 \leq z \leq 1 \\ z \left(\frac{\pi}{2} - 2 \cos^{-1}\left(\frac{1}{z}\right)\right) & \text{ if } 1 < z \leq \sqrt{2} \end{cases}$$
$F_z(z)$ :
You can use the indefinite integral:
$$\int z \cos^{-1}\left(\frac{1}{z}\right) = \frac{1}{2} z \left( z \cos^{-1}\left(\frac{1}{z}\right) - \sqrt{1-\frac{1}{z^2}} \right) + C $$
note $\frac{d}{du} \cos^{-1}(u) = - (1-u^2)^{-0.5}$
This leads straightforwardly to something similar to Xi'an's expression for $Pr(Z \leq z)$, namely
if $1 \leq z \leq \sqrt{2}$ then:
$$F_z(z) = {z^2} \left(\frac{\pi}{4}-\cos^{-1}\left(\frac{1}{z}\right) + z^{-1}\sqrt{1-\frac{1}{z^2}} \right)$$
The relation with your expression is seen when we split up the $cos^{-1}$ into two $cos^{-1}$ expressions, and then convert them to different $sin^{-1}$ expressions.
for $z>1$ we have
$$\cos^{-1}\left(\frac{1}{z}\right) = \sin^{-1}\left(\sqrt{1-\frac{1}{z^2}}\right) = \sin^{-1}\left(\frac{\sqrt{z^2-1}}{z}\right) $$
and
$$\cos^{-1}\left(\frac{1}{z}\right) = \frac{\pi}{2} -\sin^{-1}\left(\frac{1}{z}\right) $$
so
$$\begin{array}\\
\cos^{-1}\left(\frac{1}{z}\right) & = 0.5 \cos^{-1}\left(\frac{1}{z}\right) + 0.5 \cos^{-1}\left(\frac{1}{z}\right) \\
& = \frac{\pi}{4} - 0.5 \sin^{-1}\left(\frac{1}{z}\right) + 0.5 \sin^{-1}\left(\frac{\sqrt{z^2-1}}{z}\right) \end{array} $$
which results in your expression when you plug this into the before mentioned $F_z(z)$ for $1<z<\sqrt{2}$ | Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables
$f_z(z)$ :
So, for $1\le z<\sqrt 2$, we have
$\cos^{-1}\left(\frac{1}{z}\right)\le\theta\le\sin^{-1}\left(\frac{1}{z}\right)$
You can simplify your expressions when you use symmetry and evaluate t |
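A quick numerical sanity check (a Python sketch, not from the answer) that the $f_z(z)$ above integrates to $1$ over $[0,\sqrt 2]$:

```python
import numpy as np

z = np.linspace(0.0, np.sqrt(2), 200_001)
# f(z) = z*pi/2 for z <= 1 and z*(pi/2 - 2*acos(1/z)) for 1 < z <= sqrt(2);
# clipping 1/z at 1 makes acos(...) equal 0 on the first branch automatically
f = z * (np.pi/2 - 2 * np.arccos(np.minimum(1.0, 1.0 / np.maximum(z, 1e-300))))
area = ((f[1:] + f[:-1]) / 2 * np.diff(z)).sum()   # trapezoid rule
print(area)   # close to 1
```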
25,860 | Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables | For $0 \leq z \leq 1$, $P\left(\sqrt{X^2+Y^2} \leq z\right)$ is just the area of the quarter-circle of radius $z$ which is $\frac 14 \pi z^2$. That is,
$$\text{For }0 \leq z \leq 1, ~\text{area of quarter-circle} = \frac{\pi z^2}{4} = P\left(\sqrt{X^2+Y^2} \leq z\right).$$
For $1 < z \leq \sqrt{2}$, the region over which we need to integrate to find $P\left(\sqrt{X^2+Y^2} \leq z\right)$ can be divided into two right triangles $\big($one of them has vertices $(0,0), (0,1)$ and $(\sqrt{z^2-1}, 1)$ while the other has vertices $(0,0), (1,0)$ and $(1, \sqrt{z^2-1})$ $\big)$ together with a sector of a circle of radius $z$ and included angle $\frac{\pi}{2}-2\arccos\left(\frac{1}{z}\right)$. The area of this region (and hence the value of $P\left(\sqrt{X^2+Y^2} \leq z\right)$) is easily found. We have that for $1 < z \leq \sqrt{2}$,
\begin{align}\text{area of region} &= \text{area of two triangles plus area of sector}\\
&=\sqrt{z^2-1} + \frac 12 z^2\left( \frac{\pi}{2}-2\arccos \left(\frac{1}{z}\right)\right)\\
&= \frac{\pi z^2}{4} + \sqrt{z^2-1} - z^2\arccos \frac{1}{z}\\
&= P\left(\sqrt{X^2+Y^2} \leq z\right)\end{align}
which is the result in Martijn Wetering's answer. | Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables | For $0 \leq z \leq 1$, $P\left(\sqrt{X^2+Y^2} \leq z\right)$ is just the area of the quarter-circle of radius $z$ which is $\frac 14 \pi z^2$. That is,
$$\text{For }0 \leq z \leq 1, ~\text{area of q | Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables
For $0 \leq z \leq 1$, $P\left(\sqrt{X^2+Y^2} \leq z\right)$ is just the area of the quarter-circle of radius $z$ which is $\frac 14 \pi z^2$. That is,
$$\text{For }0 \leq z \leq 1, ~\text{area of quarter-circle} = \frac{\pi z^2}{4} = P\left(\sqrt{X^2+Y^2} \leq z\right).$$
For $1 < z \leq \sqrt{2}$, the region over which we need to integrate to find $P\left(\sqrt{X^2+Y^2} \leq z\right)$ can be divided into two right triangles $\big($one of them has vertices $(0,0), (0,1)$ and $(\sqrt{z^2-1}, 1)$ while the other has vertices $(0,0), (1,0)$ and $(1, \sqrt{z^2-1})$ $\big)$ together with a sector of a circle of radius $z$ and included angle $\frac{\pi}{2}-2\arccos\left(\frac{1}{z}\right)$. The area of this region (and hence the value of $P\left(\sqrt{X^2+Y^2} \leq z\right)$) is easily found. We have that for $1 < z \leq \sqrt{2}$,
\begin{align}\text{area of region} &= \text{area of two triangles plus area of sector}\\
&=\sqrt{z^2-1} + \frac 12 z^2\left( \frac{\pi}{2}-2\arccos \left(\frac{1}{z}\right)\right)\\
&= \frac{\pi z^2}{4} + \sqrt{z^2-1} - z^2\arccos \frac{1}{z}\\
&= P\left(\sqrt{X^2+Y^2} \leq z\right)\end{align}
which is the result in Martijn Wetering's answer. | Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables
For $0 \leq z \leq 1$, $P\left(\sqrt{X^2+Y^2} \leq z\right)$ is just the area of the quarter-circle of radius $z$ which is $\frac 14 \pi z^2$. That is,
$$\text{For }0 \leq z \leq 1, ~\text{area of q |
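For one illustrative value of $z$, the triangles-plus-sector decomposition can be checked numerically (a short Python sketch, not from the answer):

```python
import math

z = 1.3   # any z in (1, sqrt(2))
triangles = math.sqrt(z*z - 1)                         # the two right triangles together
sector = 0.5 * z*z * (math.pi/2 - 2*math.acos(1/z))    # sector with the stated included angle
closed_form = math.pi*z*z/4 + math.sqrt(z*z - 1) - z*z*math.acos(1/z)
print(triangles + sector, closed_form)                 # the two agree
```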
25,861 | mutual information for feature selection | I think that your confusion about the results comes from the same problem, which I had when I was studying the mutual information: you are trying to interpret results having in mind (a) the ability to predict one signal from another and (b) forgetting the statistical ingredient, which is always present in the mutual information calculations. In that sense the "mutual information" name is to some extent misleading, since you might think: OK, if I know one signal and can predict the other with 100% certainty, these signals should have highest mutual information. But this is not the case. Let me give you two examples to illustrate this claim.
Consider two signals $x_i$ and $y_i$ which have a functional dependence
$$y_i = \sin(x_i)$$
If you know $x_i$ you can predict $y_i$ with 100% certainty, but if you know $y_i$, you cannot find $x_i$ since the equation has an infinite number of solutions. So, if mutual information were a measure of mutual dependence, it would have been non-symmetric: it would have given a high value for the mutual dependence $y(x)$ and a low value for the mutual dependence $x(y)$. But the mutual information is symmetric
$$I(x_i, y_i) = I(y_i, x_i)$$
and therefore is not a measure of how certain is dependence of one variable from the other.
Another example. Imagine two signals with a perfect functional dependence
$$y_i = x_i$$
where $x_i = \{0, 1\}$. First, assume that $x_i$ is distributed uniformly
$$p(x_i = 0) = \frac{1}{2}\ , \quad p(x_i = 1) = \frac{1}{2}$$
Then the joint distribution matrix will be
$$p(x_i, y_i) = \pmatrix{\frac{1}{2} & 0 \\ 0 & \frac{1}{2}}$$
and the mutual information will give you the highest value for all 2 by 2 probability matrices
$$I(x_i, y_i) = {\rm entropy}(x_i) = {\rm entropy}(y_i) = \log(2) \approx 0.693$$
which is exactly what you expect from mutual information even if you consider it as a measure of mutual dependence. But what if we take non-uniformly distributed random variable $x_i$?
$$p(x_i = 0) = \frac{1}{10}\ , \quad p(x_i = 1) = \frac{9}{10}$$
The joint probability matrix is then
$$\pmatrix{\frac{1}{10} & 0 \\ 0 & \frac{9}{10}}$$
and mutual information is
$$I(x_i,y_i) = \frac{1}{10} \log(10) + \frac{9}{10} \log \left( \frac{10}{9} \right) = {\rm entropy}(x_i) = {\rm entropy}(y_i) \approx 0.325$$
Notice that we still have perfect prediction ability: given $x_i$ we know for sure the value of $y_i$ and vice versa. But the mutual information is much less now. What has happened? The answer again is simple: the mutual information is not a measure of mutual dependence, but a measure of mutual entropy. The entropy decreased, and so did the mutual information.
That is why I personally dislike the name "mutual information" and prefer my own naming convention: mutual entropy. But I do understand where the name comes from - the entropy is indeed the measure of information, contained in the signal. In the first example $x_i$ had maximal entropy $\log(2)$ and was perfectly correlated with $y_i$, that is why their mutual entropy was also $\log(2)$. In the second example the perfect correlation remained, but the entropy of each signal decreased and their mutual entropy decreased also.
Finally, to the example from your question. Signal response has only two possible values, $0$ and $1$. Its entropy is
$$S(response) = \frac{8}{10} \log\left(\frac{10}{8}\right) + \frac{2}{10} \log\left(\frac{10}{2}\right) = 0.5004$$
The signals var_1 and var_2 are much richer and have higher entropies, but the mutual entropies $I(response, var_1)$ and $I(response, var_2)$ cannot be higher than entropies of each ingredient - they can share only what they have. So we should look if var_1 or var_2 can decrease the entropy of response. But this is not the case: from var_1 and var_2 we can always infer the value of response. That is why the mutual information values are the same in both cases and equal to the entropy of response.
I said that I dislike the term "mutual information", but I am not saying that it is wrong - it is just not very intuitive and the reasoning in terms of mutual information is not very intuitive also. Imagine that we have two transmitters: one transmitter is sending only 0's and 1's (response), the other is sending values from 1 to 10 (var_1 or var_2). The second transmitter can send much more information, than the first one (because the entropy of the signal is larger). But we say: imagine that they are sending the same message to some distant planet, just encode it differently. Then the second transmitter is sending a lot of extra (i.e. useless) information: when it is enough to send only value $0$, var_1 sends 8 different values - $1,2,3,4,5,6,7,9$. This is extra and not needed, but still we can reconstruct signal response from var_1. The same is for var_2 - we can reconstruct the signal response from var_2, only that var_2 uses even more useless information - it uses 2 different values - $8$ and $10$ in order to encode the value of 1. Still we can reconstruct the signal response, and that is why the mutual information values are the same and are equal to the value of entropy (information) of response.
P.S. I do acknowledge that some of my arguments are pure hand-waving. | mutual information for feature selection | I think that your confusion about the results comes from the same problem, which I had when I was studying the mutual information: you are trying to interpret results having in mind (a) the ability to | mutual information for feature selection
I think that your confusion about the results comes from the same problem, which I had when I was studying the mutual information: you are trying to interpret results having in mind (a) the ability to predict one signal from another and (b) forgetting the statistical ingredient, which is always present in the mutual information calculations. In that sense the "mutual information" name is to some extent misleading, since you might think: OK, if I know one signal and can predict the other with 100% certainty, these signals should have highest mutual information. But this is not the case. Let me give you two examples to illustrate this claim.
Consider two signals $x_i$ and $y_i$ which have a functional dependence
$$y_i = \sin(x_i)$$
If you know $x_i$ you can predict $y_i$ with 100% certainty, but if you know $y_i$, you cannot find $x_i$ since the equation has an infinite number of solutions. So, if mutual information were a measure of mutual dependence, it would have been non-symmetric: it would have given a high value for the mutual dependence $y(x)$ and a low value for the mutual dependence $x(y)$. But the mutual information is symmetric
$$I(x_i, y_i) = I(y_i, x_i)$$
and therefore is not a measure of how certain is dependence of one variable from the other.
Another example. Imagine two signals with a perfect functional dependence
$$y_i = x_i$$
where $x_i = \{0, 1\}$. First, assume that $x_i$ is distributed uniformly
$$p(x_i = 0) = \frac{1}{2}\ , \quad p(x_i = 1) = \frac{1}{2}$$
Then the joint distribution matrix will be
$$p(x_i, y_i) = \pmatrix{\frac{1}{2} & 0 \\ 0 & \frac{1}{2}}$$
and the mutual information will give you the highest value for all 2 by 2 probability matrices
$$I(x_i, y_i) = {\rm entropy}(x_i) = {\rm entropy}(y_i) = \log(2) \approx 0.693$$
which is exactly what you expect from mutual information even if you consider it as a measure of mutual dependence. But what if we take non-uniformly distributed random variable $x_i$?
$$p(x_i = 0) = \frac{1}{10}\ , \quad p(x_i = 1) = \frac{9}{10}$$
The joint probability matrix is then
$$\pmatrix{\frac{1}{10} & 0 \\ 0 & \frac{9}{10}}$$
and mutual information is
$$I(x_i,y_i) = \frac{1}{10} \log(10) + \frac{9}{10} \log \left( \frac{10}{9} \right) = {\rm entropy}(x_i) = {\rm entropy}(y_i) \approx 0.325$$
Notice that we still have perfect prediction ability: given $x_i$ we know for sure the value of $y_i$ and vice versa. But the mutual information is much less now. What has happened? The answer again is simple: the mutual information is not a measure of mutual dependence, but a measure of mutual entropy. The entropy decreased, and so did the mutual information.
That is why I personally dislike the name "mutual information" and prefer my own naming convention: mutual entropy. But I do understand where the name comes from - the entropy is indeed the measure of information, contained in the signal. In the first example $x_i$ had maximal entropy $\log(2)$ and was perfectly correlated with $y_i$, that is why their mutual entropy was also $\log(2)$. In the second example the perfect correlation remained, but the entropy of each signal decreased and their mutual entropy decreased also.
Finally, to the example from your question. Signal response has only two possible values, $0$ and $1$. Its entropy is
$$S(response) = \frac{8}{10} \log\left(\frac{10}{8}\right) + \frac{2}{10} \log\left(\frac{10}{2}\right) = 0.5004$$
The signals var_1 and var_2 are much richer and have higher entropies, but the mutual entropies $I(response, var_1)$ and $I(response, var_2)$ cannot be higher than entropies of each ingredient - they can share only what they have. So we should look if var_1 or var_2 can decrease the entropy of response. But this is not the case: from var_1 and var_2 we can always infer the value of response. That is why the mutual information values are the same in both cases and equal to the entropy of response.
I said that I dislike the term "mutual information", but I am not saying that it is wrong - it is just not very intuitive and the reasoning in terms of mutual information is not very intuitive also. Imagine that we have two transmitters: one transmitter is sending only 0's and 1's (response), the other is sending values from 1 to 10 (var_1 or var_2). The second transmitter can send much more information, than the first one (because the entropy of the signal is larger). But we say: imagine that they are sending the same message to some distant planet, just encode it differently. Then the second transmitter is sending a lot of extra (i.e. useless) information: when it is enough to send only value $0$, var_1 sends 8 different values - $1,2,3,4,5,6,7,9$. This is extra and not needed, but still we can reconstruct signal response from var_1. The same is for var_2 - we can reconstruct the signal response from var_2, only that var_2 uses even more useless information - it uses 2 different values - $8$ and $10$ in order to encode the value of 1. Still we can reconstruct the signal response, and that is why the mutual information values are the same and are equal to the value of entropy (information) of response.
P.S. I do acknowledge that some of my arguments are pure hand-waving. | mutual information for feature selection
25,862 | lightgbm: Understanding why it is fast | Since a more detailed explanation was asked:
There are three reasons why LightGBM is fast:
Histogram based splitting
Gradient-based One-Side Sampling (GOSS)
Exclusive Feature Bundling (EFB)
Histogram based splitting has been in the literature since the late 1990's, but it became popular with Xgboost, which was the first publicly available package to implement it. Since finding the exact optimal split is very costly when there's a lot of data (it involves testing every possible split point), using a quantile (or histogram) based approximate solution can make the splitting procedure much faster without losing too much accuracy. This involves computing some optimal weighted quantiles of your feature (i.e. grouping data into bins) and choosing the split points between these quantiles. The algorithm for this procedure can be found in Xgboost's paper. Xgboost proposed local and global histograms, meaning that they would be computed for every feature either at the beginning of the algorithm (global) or at every new split (local). LightGBM briefly says that it bases its work on histogram based splitting (there are many papers on this), but it does not clarify the way the histograms are computed nor how this is implemented together with GOSS.
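As a toy sketch of the idea (my own, not LightGBM's or Xgboost's actual implementation; simple equal-frequency bins stand in for the weighted quantiles), histogram-based split finding accumulates per-bin gradient/hessian sums and scans only the bin boundaries:

```python
def best_histogram_split(x, g, h, n_bins=16, lam=1.0):
    """Bin one feature, accumulate per-bin gradient (g) and hessian (h) sums,
    then scan bin boundaries for the split with the best xgboost-style gain."""
    xs = sorted(x)
    n = len(xs)
    # equal-frequency bin edges: a crude stand-in for weighted quantiles
    edges = [xs[(i * n) // n_bins] for i in range(1, n_bins)]

    def bin_of(v):
        b = 0
        while b < len(edges) and v >= edges[b]:
            b += 1
        return b

    G, H = [0.0] * n_bins, [0.0] * n_bins
    for xi, gi, hi in zip(x, g, h):
        b = bin_of(xi)
        G[b] += gi
        H[b] += hi

    Gt, Ht = sum(G), sum(H)
    score = lambda gs, hs: gs * gs / (hs + lam)
    best_gain, best_bin, gl, hl = float("-inf"), None, 0.0, 0.0
    for b in range(n_bins - 1):           # only n_bins-1 candidate splits
        gl += G[b]; hl += H[b]
        gain = score(gl, hl) + score(Gt - gl, Ht - hl) - score(Gt, Ht)
        if gain > best_gain:
            best_gain, best_bin = gain, b
    return best_bin, best_gain
```

The point is the complexity: instead of testing every distinct feature value, only `n_bins - 1` candidate splits are scored.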
Gradient-based One-Side Sampling (GOSS) is an exclusive feature of LightGBM, and it's some sort of advanced subsampling of the data. Since the computational time for split finding is proportional to the number of features and instances, subsampling the instances makes this problem faster, and this is also the idea behind Stochastic Gradient Boosting by Friedman. However, SGB samples the data randomly, often causing a decrease in accuracy of the model. What GOSS does instead is something similar to Adaboost - records are weighted by their pseudo-residuals - since instances with low residuals have little impact on the training as they are already well-trained. Therefore, high-residuals records are kept while low-residuals ones are heavily subsampled, and their weights are recalibrated in order to avoid inserting a bias in the distribution of the residuals. This greatly reduces the number of instances, while maintaining an extremely good performance, and it is one of the reasons why the algorithm is performing better than other histogram based packages such as H2O or XGboost.
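The sampling step just described can be sketched as follows (my own simplified rendering of the paper's procedure; `a` is the fraction of large-gradient rows kept, `b` the fraction of the remaining rows sampled, and the sampled rows are up-weighted by $(1-a)/b$ to keep the gradient distribution unbiased):

```python
import random

def goss_sample(gradients, a=0.2, b=0.1, seed=None):
    """GOSS sketch: keep the top-a fraction of rows by |gradient|, randomly
    sample a b fraction of the rest, and up-weight the sampled rows."""
    rnd = random.Random(seed)
    n = len(gradients)
    order = sorted(range(n), key=lambda i: -abs(gradients[i]))
    top_n, rest_n = int(a * n), int(b * n)
    kept = order[:top_n]                         # high-residual rows: always kept
    sampled = rnd.sample(order[top_n:], rest_n)  # low-residual rows: subsampled
    weights = [1.0] * top_n + [(1 - a) / b] * rest_n
    return kept + sampled, weights
```

With the defaults, only 30% of the rows survive, yet the reweighting keeps the sum of gradients approximately unbiased.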
Exclusive Feature Bundling (EFB) is used to deal with sparse features. I will not get into the details at all, mostly because I am not particularly familiar with them; however, suffice to say that EFB is used to bundle sparse features together (features that are never non-zero together), in a way that greatly reduces computational effort on big sparse datasets (as mentioned, finding splits is also proportional to the total number of features). The optimal bundling of the sparse features is usually an NP-hard problem, but it is solved with good approximation through a greedy algorithm.
In their documentation they also mention the leaf-wise (best-first) growth of the trees. This is not mentioned, as far as I know, in the paper, but it is supposed to increase accuracy rather than speed.
Source: LightGBM paper :)
25,863 | lightgbm: Understanding why it is fast | Histogram based algorithms.
Read more here -
https://github.com/Microsoft/LightGBM/blob/master/docs/Features.rst
25,864 | How to generate Bernoulli random variables with common correlation $\rho$? | Because this correlation matrix is so symmetric, we might try to solve the problem with a symmetric distribution.
One of the simplest that gives sufficient flexibility in varying the correlation is the following. Given $d\ge 2$ variables, define a distribution on the set of $d$-dimensional binary vectors $X$ by assigning probability $q$ to $X=(1,1,\ldots, 1)$, probability $q$ to $X=(0,0,\ldots, 0)$, and distributing the remaining probability $1-2q$ equally among the $d$ vectors having exactly one $1$; thus, each of those gets probability $(1-2q)/d$. Note that this family of distributions depends on just one parameter $0\le q\le 1/2$.
It's easy to simulate from one of these distributions: output a vector of zeros with probability $q$, output a vector of ones with probability $q$, and otherwise select uniformly at random from the columns of the $d\times d$ identity matrix.
All the components of $X$ are identically distributed Bernoulli variables. They all have common parameter
$$p = \Pr(X_1 = 1) = q + \frac{1-2q}{d}.$$
Compute the covariance of $X_i$ and $X_j$ by observing they can both equal $1$ only when all the components are $1$, whence
$$\Pr(X_i=1=X_j) = \Pr(X=(1,1,\ldots,1)) = q.$$
This determines the mutual correlation as
$$\rho = \frac{d^2q - ((d-2)q + 1)^2}{(1 + (d-2)q)(d-1 - (d-2)q)}.$$
Given $d \ge 2$ and $-1/(d-1)\le \rho \le 1$ (which is the range of all possible correlations of any $d$-variate random variable), there is a unique solution $q(\rho)$ between $0$ and $1/2$.
Simulations bear this out. Beginning with a set of $21$ equally-spaced values of $\rho$, the corresponding values of $q$ were computed (for the case $d=8$) and used to generate $10,000$ independent values of $X$. The $\binom{8}{2}=28$ correlation coefficients were computed and plotted on the vertical axis. The agreement is good.
I carried out a range of such simulations for values of $d$ between $2$ and $99$, with comparably good results.
A generalization of this approach (namely, allowing for two, or three, or ... values of the $X_i$ simultaneously to equal $1$) would give greater flexibility in varying $E[X_i]$, which in this solution is determined by $\rho$. That combines the ideas related here with the ones in the fully general $d=2$ solution described at https://stats.stackexchange.com/a/285008/919.
The following R code features a function p to compute $q$ from $\rho$ and $d$ and exhibits a fairly efficient simulation mechanism within its main loop.
#
# Determine p(All zeros) = p(All ones) from rho and d.
#
p <- function(rho, d) {
if (rho==1) return(1/2)
if (rho <= -1/(d-1)) return(0)
if (d==2) return((1+rho)/4)
b <- d-2
(4 + 2*b + b^2*(1-rho) - (b+2)*sqrt(4 + b^2 * (1-rho)^2)) / (2 * b^2 * (1-rho))
}
#
# Simulate a range of correlations `rho`.
#
d <- 8 # The number of variables.
n.sim <- 1e4 # The number of draws of X in the simulation.
rho.limits <- c(-1/(d-1), 1)
rho <- seq(rho.limits[1], rho.limits[2], length.out=21)
rho.hat <- sapply(rho, function(rho) {
#
# Compute the probabilities from rho.
#
qd <- q0 <- p(rho, d)
q1 <- (1 - q0 - qd)
#
# First randomly select three kinds of events: all zero, one 1, all ones.
#
u <- sample.int(3, n.sim, prob=c(q0,q1,qd), replace=TRUE)
#
# Conditionally, when there is to be one 1, uniformly select which
# component will equal 1.
#
k <- diag(d)[, sample.int(d, n.sim, replace=TRUE)]
#
# When there are to be all zeros or all ones, make it so.
#
k[, u==1] <- 0
k[, u==3] <- 1
#
# The simulated values of X are the columns of `k`. Return all d*(d-1)/2 correlations.
#
cor(t(k))[lower.tri(diag(d))]
})
#
# Display the simulation results.
#
plot(rho, rho, type="n",
xlab="Intended Correlation",
ylab="Sample Correlation",
xlim=rho.limits, ylim=rho.limits,
main=paste(d, "Variables,", n.sim, "Iterations"))
abline(0, 1, col="Red", lwd=2)
invisible(apply(rho.hat, 1, function(y)
points(rho, y, pch=21, col="#00000010", bg="#00000004")))
25,865 | What's a "patch" in CNN? | By reading around, a "patch" seems to be a subsection of an input image to the CNN, but what exactly is it?
It's exactly what you describe. The kernel (or filter or feature detector) only looks at one chunk of an image at a time, then the filter moves to another patch of the image, and so on.
When does a "patch" come into play when solving problems using CNN?
When you apply a CNN filter to an image, it looks at one patch at a time.
Why do we need "patches"?
CNN kernels/filters only process one patch at a time, rather than the whole image. This is because we want filters to process small pieces of the image in order to detect features (edges, etc). This also has a nice regularization property, since we're estimating a smaller number of parameters, and those parameters have to be "good" across many regions of each image, as well as many regions of all other training images.
What's the relation between a "patch" and a kernel (i.e. the feature detector)?
The patch is the input to the kernel.
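A minimal sketch of this sliding-patch view (my own illustration, not from the original answer; it computes cross-correlation with valid padding, as most deep-learning libraries do):

```python
def conv2d_patches(image, kernel):
    """Apply a kernel to an image one patch at a time (valid padding).
    image and kernel are 2-D lists of numbers."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = [[0.0] * (iw - kw + 1) for _ in range(ih - kh + 1)]
    for i in range(ih - kh + 1):
        for j in range(iw - kw + 1):
            # the "patch" is the kh x kw sub-image the kernel currently sees
            out[i][j] = sum(image[i + di][j + dj] * kernel[di][dj]
                            for di in range(kh) for dj in range(kw))
    return out
```

Each output value is produced from exactly one patch, which is why the same small set of kernel weights gets reused across the whole image.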
25,866 | Is feature normalisation needed prior to computing cosine distance? | The definition of the cosine similarity is:
$$
\text{similarity} = \cos(\theta) = {\mathbf{A} \cdot \mathbf{B} \over \|\mathbf{A}\|_2 \|\mathbf{B}\|_2} = \frac{ \sum\limits_{i=1}^{n}{A_i B_i} }{ \sqrt{\sum\limits_{i=1}^{n}{A_i^2}} \sqrt{\sum\limits_{i=1}^{n}{B_i^2}} }
$$
It is sensitive to the mean of features. To see this, choose some $j \in \{1, \ldots, n\}$, and add a very large positive number $k$ to the $j$th component of each vector. The similarity will then be
$$
\sim \frac{k^2}{\sqrt{k^2}\sqrt{k^2}} = 1.
$$
For this reason, the adjusted cosine similarity is often used. It is simply the cosine similarity applied to mean-removed features.
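A quick demonstration of both effects (my own sketch on synthetic data): one feature with a huge common offset drives the raw cosine similarity of any two observations to $\approx 1$, while removing each feature's mean across observations (the adjusted cosine) restores a meaningful comparison.

```python
import math
import random

def cosine(a, b):
    """Plain cosine similarity of two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

rnd = random.Random(0)
# 100 observations of 5 features; feature 2 carries a huge common offset k
X = [[rnd.gauss(0, 1) + (1e6 if j == 2 else 0) for j in range(5)]
     for _ in range(100)]

raw = cosine(X[0], X[1])   # ~1 regardless of the data: the offset dominates

# adjusted cosine: subtract each feature's mean across observations
means = [sum(row[j] for row in X) / len(X) for j in range(5)]
Xc = [[row[j] - means[j] for j in range(5)] for row in X]
adjusted = cosine(Xc[0], Xc[1])   # reflects the actual (random) structure
```

Note the mean is taken per feature across the collection, not within each vector; that is what cancels the shared offset $k$.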
25,867 | Is feature normalisation needed prior to computing cosine distance? | This is frequently why features are one-hot encoded. By normalizing all features to a 0-1 range, it prevents certain features from having stronger importance than others. Conversely, if you want some features to have stronger importance than others in terms of defining similarity between vectors, then it is acceptable to use non-normalized figures. It is important to understand how that impacts the similarity measure in such cases, however.
25,868 | Can we accept the null in noninferiority tests? | Your logic applies in exactly the same way to the good old one-sided tests (i.e. with $x=0$) that may be more familiar to the readers. For concreteness, imagine we are testing the null $H_0:\mu\le0$ against the alternative that $\mu$ is positive. Then if true $\mu$ is negative, increasing sample size will not yield a significant result, i.e., to use your words, it is not true that "if we got more evidence, the same effect size would become significant".
If we test $H_0:\mu\le 0$, we can have three possible outcomes:
First, $(1-\alpha)\cdot100\%$ confidence interval can be entirely above zero; then we reject the null and accept the alternative (that $\mu$ is positive).
Second, confidence interval can be entirely below zero. In this case we do not reject the null. However, in this case I think it is fine to say that we "accept the null", because we could consider $H_1$ as another null and reject that one.
Third, confidence interval can contain zero. Then we cannot reject $H_0$ and we cannot reject $H_1$ either, so there is nothing to accept.
So I would say that in one-sided situations one can accept the null, yes. But we cannot accept it simply because we failed to reject it; there are three possibilities, not two.
(Exactly the same applies to tests of equivalence aka "two one sided tests" (TOST), tests of non-inferiority, etc. One can reject the null, accept the null, or obtain an inconclusive result.)
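As a rough illustration (my own addition; a z-approximation stands in for a proper t interval for simplicity), the three outcomes can be read off a confidence interval:

```python
import math
from statistics import NormalDist, mean, stdev

def one_sided_outcome(sample, alpha=0.05):
    """Classify a sample against H0: mu <= 0 using a (1-alpha) z-approximate CI.
    Returns one of the three outcomes described in the text."""
    se = stdev(sample) / math.sqrt(len(sample))
    z = NormalDist().inv_cdf(1 - alpha / 2)
    lo, hi = mean(sample) - z * se, mean(sample) + z * se
    if lo > 0:
        return "reject H0: accept mu > 0"        # CI entirely above zero
    if hi < 0:
        return "accept H0: mu is negative"       # CI entirely below zero
    return "inconclusive: CI contains 0"
```

The key point is that "accept H0" is a distinct third outcome, not the complement of "reject H0".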
In contrast, when $H_0$ is a point null such as $H_0:\mu=0$, we can never accept it, because $H_1:\mu\ne 0$ does not constitute a valid null hypothesis.
(Unless $\mu$ can have only discrete values, e.g. must be integer; then it seems that we could accept $H_0:\mu=0$ because $H_1:\mu\in\mathbb Z,\mu\ne 0$ now does constitute a valid null hypothesis. This is a bit of special case though.)
This issue was discussed some time ago in the comments under @gung's answer here: Why do statisticians say a non-significant result means "you can't reject the null" as opposed to accepting the null hypothesis?
See also an interesting (and under-voted) thread Does failure to reject the null in Neyman-Pearson approach mean that one should "accept" it?, where @Scortchi explains that in the Neyman-Pearson framework some authors have no problem talking about "accepting the null". That is also what @Alexis means in the last paragraph of her answer here.
25,869 | Can we accept the null in noninferiority tests? | We never "accept the null hypothesis" (without also giving consideration to power and minimum relevant effect size). With a single hypothesis test, we pose a state of nature, $H_{0}$, and then answer some variation of the question "how unlikely are we to have observed the data underlying our test statistic, assuming $H_{0}$ (and our distributional assumption) is true?" We will then reject or fail to reject our $H_{0}$ based on a preferred Type I error rate, and draw a conclusion that is always about $H_{A}$: that is, we found evidence to conclude $H_{A}$, or we did not find evidence to conclude $H_{A}$. We do not accept $H_{0}$ because we did not look for evidence for it. Absence of evidence (e.g., of a difference) is not the same thing as evidence of absence (e.g., of a difference).
This is true for one-sided tests, just as it is for two-sided tests: we only look for evidence in favor of $H_{A}$ and find it, or do not find it.
If we only pose a single $H_{0}$ (without giving serious attention to both minimum relevant effect size, and statistical power), we are effectively making an a priori commitment to confirmation bias, because we have not looked for evidence for $H_{0}$, only evidence for $H_{A}$. Of course, we can (and, dare I say, should) pose null hypotheses for and against a position (relevance tests that combine tests for difference ($H_{0}^{+}$) with tests for equivalence ($H^{-}_{0}$) do just this).
It seems to me that there is no reason why you cannot combine inference from a one-sided test for inferiority with a one-sided test for non-inferiority to provide evidence (or lack of evidence) in both directions simultaneously.
Of course, if one is considering power and effect size, and one fails to reject $H_{0}$, but knows that there is (a) some minimum relevant effect size $\delta$, and (b) that their data are powerful enough to detect it for a given test, then one can interpret that as evidence of $H_{0}$.
25,870 | A reverse birthday problem: no pair out of 1 million aliens shares a birthday; what is their year length? | Assuming all birthdays are equally likely and the birthdays are independent, the chance that $k+1$ aliens do not share a birthday is
$$p(k;N) = 1\left(1-\frac{1}{N}\right)\left(1-\frac{2}{N}\right)\cdots\left(1-\frac{k}{N}\right).$$
Its logarithm can be summed asymptotically provided $k$ is much smaller than $N$:
$$\log(p(k;N)) = -\frac{k(k+1)}{2N} - \frac{k + 3k^2 + 2k^3}{12N^2} - O(k^4 N^{-3}).\tag{1}$$
To be $100-100\alpha\%$ confident that $N$ is no less than some value $N^{*}$, we need $(1)$ to be greater than $\log(1-\alpha)$. A small $\alpha$ ensures $N$ is much larger than $k$, whence we may approximate $(1)$ accurately as $-k^2/(2N)$. This yields
$$-\frac{k^2}{2N} \gt \log(1-\alpha),$$
implying
$$N \gt\frac{-k^2}{2\log(1-\alpha)} \approx \frac{k^2}{2\alpha}=N^{*}\tag{2}$$
for small $\alpha$.
For instance, with $k=10^6-1$ as in the question and $\alpha=0.05$ (a conventional value corresponding to $95\%$ confidence), $(2)$ gives $N \gt 10^{13}$.
Here's a more expansive interpretation of this result. Without approximating in formula $(2)$, we obtain $N=9.74786\times 10^{12}$. For this $N$ the chance of no collision in a million birthdays is $p(10^6-1, 9.74786\times 10^{12})=95.0000\ldots\%$ (computed without approximation), essentially equal to our threshold of $95\%$. Thus for any $N$ this large or larger it's $95\%$ or more likely there will be no collisions, which is consistent with what we know, but for any smaller $N$ the chance of a collision gets above $100 - 95= 5\%$, which starts to make us fear we might have underestimated $N$.
As another example, in the traditional Birthday problem there is a $4\%$ chance of a collision among $k=6$ people and a $5.6\%$ chance of a collision among $k=7$ people. These numbers suggest $N$ ought to exceed $360$ and $490$, respectively, right in the range of the correct value of $366$. This shows how accurate these approximate, asymptotic results can be even for very small $k$ (provided we stick to small $\alpha$).
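A quick numerical check of formula $(2)$ and of the exact probability is easy to run (a Python/numpy sketch, not part of the original derivation):

```python
import numpy as np

# Reverse birthday bound: k+1 = 1e6 aliens with no shared birthday.
k = 10**6 - 1
alpha = 0.05

# Formula (2) without the small-alpha approximation: N* = -k^2 / (2 log(1 - alpha))
n_star = -k**2 / (2.0 * np.log1p(-alpha))
print(f"N* = {n_star:.5e}")  # about 9.748e12

# Exact evaluation of log p(k; N*) = sum_{i=1}^{k} log(1 - i/N*)
log_p = np.log1p(-np.arange(1, k + 1) / n_star).sum()
print(f"P(no collision at N*) = {np.exp(log_p):.4f}")  # essentially 0.95
```

As claimed in the text, the no-collision probability at this threshold is essentially equal to $95\%$, so the asymptotic approximation is very sharp here.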
25,871 | Why is the continuous uniform distribution not an exponential family? | Since the indicator function $\mathbb{I}_{(0,\theta)}(x)$ is part of the definition of the Uniform density, one cannot enter it inside the exponential product part $\exp\{\eta_1(\theta)T_1(x)+\ldots\}$.
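To spell out the obstruction (a brief elaboration of the same point): the Uniform density is
$$f(x\mid\theta)=\frac{1}{\theta}\,\mathbb{I}_{(0,\theta)}(x),$$
while an exponential family requires the form
$$f(x\mid\theta)=h(x)\,c(\theta)\exp\Big\{\sum_{i}\eta_i(\theta)T_i(x)\Big\}.$$
Because the exponential factor is strictly positive, the support $\{x : h(x)>0\}$ of an exponential-family density cannot depend on $\theta$; but the support $(0,\theta)$ of the Uniform does.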
25,872 | If multi-collinearity is high, would LASSO coefficients shrink to 0? | Notice that
\begin{align*}
\|y-X\beta\|_2^2 + \lambda \|\beta\|_1
& = \|y - \beta_1 x_1 - \beta_2 x_2 \|_2^2 + \lambda \left( |\beta_1| + |\beta_2| \right) \\
& = \|y - (\beta_1 + 2 \beta_2) x_1 \|_2^2 + \lambda \left( |\beta_1| + |\beta_2| \right).
\end{align*}
For any fixed value of the coefficient $\beta_1 + 2\beta_2$, the penalty $|\beta_1| + |\beta_2|$ is minimized when $\beta_1 = 0$: each unit of $\beta_1 + 2\beta_2$ obtained through $\beta_2$ costs only half as much $\ell_1$ penalty as a unit obtained through $\beta_1$. To put this in notation, $$\tilde\beta = \arg\min_{\beta \, : \, \beta_1 + 2\beta_2 = K}|\beta_1| + |\beta_2|$$ satisfies $\tilde\beta_1 = 0$ for any $K$. Therefore, the lasso estimator
\begin{align*}
\hat\beta
& = \arg\min_{\beta \in \mathbb{R}^p} \|y - X \beta\|_2^2 + \lambda \|\beta\|_1 \\
& = \arg\min_{\beta \in \mathbb{R}^p} \|y - (\beta_1 + 2 \beta_2) x_1 \|_2^2 + \lambda \left( |\beta_1| + |\beta_2| \right) \\
& = \arg_\beta \min_{K \in \mathbb{R}} \, \min_{\beta \in \mathbb{R}^p \, : \, \beta_1 + 2 \beta_2 = K} \, \|y - K x_1 \|_2^2 + \lambda \left( |\beta_1| + |\beta_2| \right) \\
& = \arg_\beta \min_{K \in \mathbb{R}} \, \left\{ \|y - K x_1 \|_2^2 + \lambda \min_{\beta \in \mathbb{R}^p \, : \, \beta_1 + 2 \beta_2 = K} \, \left\{ \left( |\beta_1| + |\beta_2| \right) \right\} \right\}
\end{align*}
satisfies $\hat\beta_1 = 0$. The reason the comments to OP's question are misleading is that there's a penalty on the model: the $(0, 50)$ and $(100,0)$ coefficient vectors give the same error, but different $\ell_1$ norms! Further, it's not necessary to look at anything like LARS: this result follows immediately from first principles.
As pointed out by Firebug, the reason why your simulation shows a contradictory result is that glmnet automatically scales the features to unit variance. That is, due to the use of glmnet, we're effectively in the case that $x_1 = x_2$. There, the estimator is no longer unique: $(100,0)$ and $(0,100)$ are both in the arg min. Indeed, $(a,b)$ is in the $\arg\min$ for any $a,b \geq 0$ such that $a+b = 100$.
In this case of equal features, glmnet will converge in exactly one iteration: it soft-thresholds the first coefficient, and then the second coefficient is soft-thresholded to zero.
This explains why the simulation found $\hat\beta_2 = 0$ in particular. Indeed, the second coefficient will always be zero, regardless of the ordering of the features.
Proof: Assume WLOG that the feature $x \in \mathbb{R}^n$ satisfies $\|x\|_2 = 1$. Coordinate descent (the algorithm used by glmnet) computes for its first iteration: $$\hat\beta_1^{(1)} = S_\lambda(x^T y)$$ followed by
\begin{align*}
\hat\beta_2^{(1)}
& = S_\lambda \left[ x^T \left( y - x S_\lambda (x^T y) \right) \right] \\
& = S_\lambda \left[ x^T y - x^T x \left( x^T y + T \right) \right] \\
& = S_\lambda \left[ - T \right] \\
& = 0,
\end{align*}
where $T = \begin{cases} - \lambda & \textrm{ if } x^T y > \lambda \\ \lambda & \textrm{ if } x^T y < -\lambda \\ 0 & \textrm{ otherwise} \end{cases}$. Then, since $\hat\beta_2^{(1)}= 0$, the second iteration of coordinate descent will repeat the computations above. Inductively, we see that $\hat\beta_j^{(i)} = \hat\beta_j^{(1)}$ for all iterations $i$ and $j \in \{1,2\}$. Therefore glmnet will report $\hat\beta_1 = \hat\beta_1^{(1)}$ and $\hat\beta_2 = \hat\beta_2^{(1)}$ since the stopping criterion is immediately reached.
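The first part of the argument is easy to check numerically. Here is a small numpy sketch (made-up data; an unstandardized lasso solved by cyclic coordinate descent, so unlike glmnet no feature scaling occurs). With $x_2 = 2x_1$ exactly, the first coefficient lands on exactly zero:

```python
import numpy as np

# Objective: ||y - X b||^2 + lam * ||b||_1, with x2 = 2*x1 exactly.
rng = np.random.default_rng(1)
x1 = rng.uniform(0.0, 1.0, 50)
X = np.column_stack([x1, 2.0 * x1])
y = 3.0 * x1 + rng.normal(0.0, 0.1, 50)
lam = 5.0

def soft(z, t):
    """Soft-thresholding operator S_t(z)."""
    return np.sign(z) * max(abs(z) - t, 0.0)

b = np.zeros(2)
for _ in range(500):                        # cyclic coordinate descent
    for j in range(2):
        r = y - X @ b + X[:, j] * b[j]      # partial residual excluding x_j
        b[j] = soft(X[:, j] @ r, lam / 2.0) / (X[:, j] @ X[:, j])

print(b)  # b[0] is driven to exactly 0; b[1] absorbs the fit
```

Once the partial correlation of $x_1$ with the residual falls below the threshold $\lambda/2$, the soft-thresholding step returns an exact zero, matching the claim that the penalty prefers to route everything through $\beta_2$.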
25,873 | If multi-collinearity is high, would LASSO coefficients shrink to 0? | When I re-run your code, I get that the coefficient of $x_2$ is numerically indistinguishable from zero.
To understand better why LASSO sets that coefficient to zero, you should look at the relationship between LASSO and Least Angle Regression (LAR). LASSO can be seen as a LAR with a special modification.
The algorithm of LAR is roughly like this: Start with an empty model (except for an intercept). Then add the predictor variable that is the most correlated with $y$, say $x_j$. Incrementally change that predictor's coefficient $\beta_j$, until the residual $y - c - x_j\beta_j$ is equally correlated with $x_j$ and another predictor variable $x_k$. Then change the coefficients of both $x_j$ and $x_k$ until a third predictor $x_l$ is equally correlated with the residual
$y - c - x_j\beta_j -x_k\beta_k$
and so on.
LASSO can be seen as LAR with the following twist: as soon as the coefficient of a predictor in your model (an "active" predictor) hits zero, drop that predictor from the model. This is what happens when you regress $y$ on the collinear predictors: both will get added to the model at the same time and, as their coefficients are changed, their respective correlation with the residuals will change proportionately, but one of the predictors will get dropped from the active set first because it hits zero first. As for which of the two collinear predictors it will be, I don't know. [EDIT: When you reverse the order of $x_1$ and $x_2$, you can see that the coefficient of $x_1$ is set to zero. So the glmnet algorithm simply seems to set those coefficients to zero first that are ordered later in the design matrix.]
A source that explains these things in more detail is Chapter 3 of "The Elements of Statistical Learning" by Friedman, Hastie and Tibshirani.
25,874 | How is the generator in a GAN trained? | It helps to think of this process in pseudocode. Let generator(z) be a function that takes a uniformly sampled noise vector z and returns a vector of same size as input vector X; let's call this length d. Let discriminator(x) be a function that takes a d dimensional vector and returns a scalar probability that x belongs to true data distribution. For training:
G_sample = generator(Z)
D_real = discriminator(X)
D_fake = discriminator(G_sample)
D_loss = -(mean of (log(D_real) + log(1 - D_fake)))  # maximize by minimizing the negative
G_loss = -(mean of log(D_fake))                      # non-saturating generator loss
# Only update D(X)'s parameters
D_solver = Optimizer().minimize(D_loss, theta_D)
# Only update G(X)'s parameters
G_solver = Optimizer().minimize(G_loss, theta_G)
# theta_D and theta_G are the weights and biases of D and G respectively
Repeat the above for a number of epochs
So, yes, you are right that we essentially think of the generator and discriminator as one giant network during the alternating minibatches that use fake data. The generator's loss function takes care of the gradients for this half. If you think of this network's training in isolation, it is trained just as you would usually train an MLP, with its input being the last layer's output of the generator network.
You can follow a detailed explanation with code in TensorFlow here (among many places):
http://wiseodd.github.io/techblog/2016/09/17/gan-tensorflow/
It should be easy to follow once you look at the code.
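To make the alternating scheme above concrete, here is a toy, self-contained numpy sketch. Everything in it is illustrative, not the GAN paper's setup: a 1-D "generator" $G(z) = w_g z + b_g$, a logistic "discriminator" $D(x) = \sigma(w_d x + b_d)$, hand-derived gradients, and made-up hyperparameters. The point is only the structure: the D step moves only theta_D, and the G step backpropagates through D but moves only theta_G:

```python
import numpy as np

rng = np.random.default_rng(0)

def sig(t):
    return 1.0 / (1.0 + np.exp(-t))

wd, bd = 0.1, 0.0        # theta_D: discriminator parameters
wg, bg = 0.1, 0.0        # theta_G: generator parameters
lr, batch = 0.05, 64

for step in range(200):
    x_real = rng.normal(3.0, 0.5, batch)            # "true" data distribution
    x_fake = wg * rng.normal(0.0, 1.0, batch) + bg  # generator samples

    # D step: ascend mean(log D(real) + log(1 - D(fake))); only theta_D moves.
    dr = sig(wd * x_real + bd)
    df = sig(wd * x_fake + bd)
    gwd = np.mean((1.0 - dr) * x_real) - np.mean(df * x_fake)
    gbd = np.mean(1.0 - dr) - np.mean(df)
    wd, bd = wd + lr * gwd, bd + lr * gbd

    # G step: ascend mean(log D(fake)); the gradient flows through D,
    # but only theta_G is updated (theta_D is held fixed).
    z = rng.normal(0.0, 1.0, batch)
    x_fake = wg * z + bg
    df = sig(wd * x_fake + bd)
    gwg = np.mean((1.0 - df) * wd * z)
    gbg = np.mean((1.0 - df) * wd)
    wg, bg = wg + lr * gwg, bg + lr * gbg

print(f"generated mean after training is roughly {bg:.2f} (data mean is 3.0)")
```

Toy GANs like this can oscillate rather than converge cleanly, so treat the final numbers as illustrative; the structural point is that each phase's update touches only its own parameter set.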
25,875 | How is the generator in a GAN trained? | Do you essentially attach the generator's outputs to the discriminator's inputs, and then treat the entire thing like one giant network where the weights in the discriminator portion are constant?
In short: yes. (I dug through some GAN implementations to double-check this.)
There is also a lot more to GAN training, e.g. whether we should update D and G on every iteration, or D on odd iterations and G on even ones, and so on.
There is also a very nice paper about this topic:
"Improved Techniques for Training GANs"
Tim Salimans, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, Xi Chen
https://arxiv.org/abs/1606.03498
25,876 | How is the generator in a GAN trained? | I have found this great resource:
https://developers.google.com/machine-learning/gan/training
Here is part of what it says:
So we train the generator with the following procedure:
1- Sample random noise.
2- Produce generator output from sampled random noise.
3- Get discriminator "Real" or "Fake" classification for generator output.
4- Calculate loss from discriminator classification.
5- Backpropagate through both the discriminator and generator to obtain gradients.
6- Use gradients to change only the generator weights.
25,877 | How is the generator in a GAN trained? | I recently uploaded a collection of various GAN models to a GitHub repo. It is torch7-based and very easy to run. The code is simple enough to understand, with experimental results. Hope this helps:
https://github.com/nashory/gans-collection.torch
25,878 | How to calculate the confidence interval of the x-intercept in a linear regression? | How to calculate the confidence interval of the x-intercept in a linear regression?
Assumptions
Use the simple regression model $y_i = \alpha + \beta x_i + \varepsilon_i$.
Errors have a normal distribution conditional on the regressors: $\varepsilon \mid X \sim \mathcal{N}(0, \sigma^2 I_n)$.
Fit using ordinary least squares.
3 procedures to calculate a confidence interval for the x-intercept
Taylor expansion (easy to use)
Marc in the box method (MIB)
CAPITANI-POLLASTRI (https://boa.unimib.it/retrieve/handle/10281/43053/64388/DECAPITANI_Pollastri.pdf)
First order Taylor expansion
Your model is $Y=aX+b$, with estimated standard deviations $\sigma_a$ and $\sigma_b$ on the $a$ and $b$ parameters and estimated covariance $\sigma_{ab}$.
You solve
$$aX+b=0 \Leftrightarrow X= \frac{-b} a.$$
Then the standard deviation $\sigma_X$ on $X$ is given by:
$$\left( \frac {\sigma_X} X \right)^2 = \left( \frac {\sigma_b} b \right)^2 + \left( \frac {\sigma_a} a \right)^2 - 2 \frac{\sigma_{ab}}{ab}.$$
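For reference, here is a Python/numpy sketch of this first-order (delta-method) interval. The document's own code is R; the data below are synthetic, mirroring the configuration used in the comparison further down, so the true x-intercept is 10:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(1, 11, dtype=float)
y = 20.0 - 2.0 * x + rng.normal(0.0, 1.0, x.size)   # true x-intercept = 10

# OLS fit of Y = aX + b: design columns (x, 1) -> coefficients (a, b)
X = np.column_stack([x, np.ones_like(x)])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ np.array([a, b])
s2 = resid @ resid / (x.size - 2)          # estimate of sigma^2
cov = s2 * np.linalg.inv(X.T @ X)          # covariance of (a, b)
var_a, var_b, cov_ab = cov[0, 0], cov[1, 1], cov[0, 1]

# X0 = -b/a and (sd_X0 / X0)^2 = (sd_b/b)^2 + (sd_a/a)^2 - 2*cov_ab/(a*b)
x0 = -b / a
se_x0 = abs(x0) * np.sqrt(var_a / a**2 + var_b / b**2 - 2.0 * cov_ab / (a * b))
lo, hi = x0 - 1.96 * se_x0, x0 + 1.96 * se_x0
print(f"x-intercept = {x0:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

Note this interval is symmetric by construction, which is exactly the limitation the conclusions below point at: the true sampling distribution of the x-intercept is asymmetric.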
MIB
See code from Marc in the box at How to calculate the confidence interval of the x-intercept in a linear regression?.
CAPITANI-POLLASTRI
CAPITANI-POLLASTRI provides the Cumulative Distribution Function and Density Function for the ratio of two correlated Normal random variables. It can be used to compute a confidence interval for the x-intercept in a linear regression. This procedure gives (almost) identical results to the ones from MIB.
Indeed, using ordinary least squares and assuming normality of the errors, $\hat\beta \sim \mathcal{N}(\beta, \sigma^2 (X^TX)^{-1})$ (verified) and the $\hat{\beta}$'s are correlated (verified).
The procedure is the following:
get OLS estimator for $a$ and $b$.
get the variance-covariance matrix and extract $\sigma_a$, $\sigma_b$, and $\sigma_{ab}=\rho\sigma_a\sigma_b$.
Assume that $a$ and $b$ follow a Bivariate Correlated Normal distribution, $\mathcal{N}(a, b, \sigma_a, \sigma_b, \rho)$. Then the density function and Cumulative Distribution Function of $x_{intercept}= \frac{-b}{a}$ are given by CAPITANI-POLLASTRI.
Use the Cumulative Distribution Function of $x_{intercept}= \frac{-b}{a}$ to compute the desired quantiles and set a confidence interval.
Comparison of the 3 procedures
The procedures are compared using the following data configuration:
x <- 1:10
a <- 20
b <- -2
y <- a + b*x + rnorm(length(x), mean=0, sd=1)
10,000 different samples are generated and analyzed using the 3 methods. The code (R) used to generate and analyze them can be found at: https://github.com/adrienrenaud/stackExchange/blob/master/crossValidated/q221630/answer.ipynb
MIB and CAPITANI-POLLASTRI give equivalent results.
First order Taylor expansion differs significantly from the two other methods.
MIB and CAPITANI-POLLASTRI suffer from under-coverage. The 68% (95%) CI is found to contain the true value 63% (92%) of the time.
First order Taylor expansion suffers from over-coverage. The 68% (95%) CI is found to contain the true value 87% (99%) of the time.
Conclusions
The x-intercept distribution is asymmetric. This justifies an asymmetric confidence interval. MIB and CAPITANI-POLLASTRI give equivalent results. CAPITANI-POLLASTRI has a nice theoretical justification and gives grounds for MIB. MIB and CAPITANI-POLLASTRI suffer from moderate under-coverage and can be used to set confidence intervals.
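This kind of coverage check is cheap to rerun for the Taylor method alone. A rough Python/numpy sketch (fewer replications than the 10,000 above, so the numbers will vary; it should show the over-coverage reported above):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(1, 11, dtype=float)
X = np.column_stack([x, np.ones_like(x)])    # Y = aX + b: columns (x, 1)
XtX_inv = np.linalg.inv(X.T @ X)
true_x0, reps, cover = 10.0, 2000, 0

for _ in range(reps):
    y = 20.0 - 2.0 * x + rng.normal(0.0, 1.0, x.size)
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ np.array([a, b])
    cov = (resid @ resid / (x.size - 2)) * XtX_inv
    x0 = -b / a
    se = abs(x0) * np.sqrt(cov[0, 0] / a**2 + cov[1, 1] / b**2
                           - 2.0 * cov[0, 1] / (a * b))
    cover += (x0 - se <= true_x0 <= x0 + se)  # nominal 68% (1 SE) interval

print(f"empirical coverage of the nominal 68% Taylor CI: {cover / reps:.3f}")
```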
Asumptions
Use the simple regression model $y_i = \alpha + \beta x_i + \varepsilon_i$.
Errors have normal distribut | How to calculate the confidence interval of the x-intercept in a linear regression?
How to calculate the confidence interval of the x-intercept in a linear regression?
Asumptions
Use the simple regression model $y_i = \alpha + \beta x_i + \varepsilon_i$.
Errors have normal distribution conditional on the regressors $\epsilon | X \sim \mathcal{N}(0, \sigma^2 I_n)$
Fit using ordinary least square
3 procedures to calculate confidence interval on x-intercept
Taylor expansion (easy to use)
Marc in the box method (MIB)
CAPITANI-POLLASTRI (https://boa.unimib.it/retrieve/handle/10281/43053/64388/DECAPITANI_Pollastri.pdf)
First order Taylor expansion
Your model is $Y=aX+b$ with estimated standard deviation $\sigma_a$ and $\sigma_b$ on $a$ and $b$ parameters and estimated covariance $\sigma_{ab}$.
You solve
$$aX+b=0 \Leftrightarrow X= \frac{-b} a.$$
Then the standard deviation $\sigma_X$ on $X$ is given by:
$$\left( \frac {\sigma_X} X \right)^2 = \left( \frac {\sigma_b} b \right)^2 + \left( \frac {\sigma_a} a \right)^2 - 2 \frac{\sigma_{ab}}{ab}.$$
MIB
See code from Marc in the box at How to calculate the confidence interval of the x-intercept in a linear regression?.
CAPITANI-POLLASTRI
CAPITANI-POLLASTRI provides the Cumulative Distribution Function and Density Function for the ratio of two correlated Normal random variables. It can be used to compute confidence interval of the x-intercept in a linear regression. This procedure gives (almost) identical results as the ones from MIB.
Indeed, using ordinary least square and assuming normality of the errors, $\hat\beta \sim \mathcal{N}(\beta, \sigma^2 (X^TX)^{-1})$ (verified) and $\hat{\beta}$'s are correlated (verified).
The procedure is the following:
get OLS estimator for $a$ and $b$.
get the variance-covariance matrix and extract, $\sigma_a, \sigma_b, \sigma_{ab}=\rho\sigma_a\sigma_b$.
Assume that $a$ and $b$ follow a Bivariate Correlated Normal distribution, $\mathcal{N}(a, b, \sigma_a, \sigma_b, \rho)$. Then the density function and Cumulative Distribution Function of $x_{intercept}= \frac{-b}{a}$ are given by CAPITANI-POLLASTRI.
Use the Cumulative Distribution Function of $x_{intercept}= \frac{-b}{a}$ to compute desired quantiles and set a confidence interval.
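The Capitani–Pollastri CDF has a closed form, but its quantiles can also be approximated by Monte Carlo (essentially what the MIB code does): draw $(a, b)$ from the fitted bivariate normal and take empirical quantiles of $-b/a$. A Python sketch with made-up estimates:

```python
import math
import random

random.seed(0)

# Made-up estimates of a, b, their standard deviations and correlation
a, b = -2.0, 20.0
sd_a, sd_b, rho = 0.2, 1.0, -0.75

draws = []
for _ in range(100_000):
    # sample (a*, b*) from the bivariate correlated normal of the estimators
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    a_star = a + sd_a * z1
    b_star = b + sd_b * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    draws.append(-b_star / a_star)   # x-intercept implied by this draw

draws.sort()
ci68 = (draws[int(0.16 * len(draws))], draws[int(0.84 * len(draws))])
```

Because the ratio distribution is asymmetric, the empirical quantiles give an asymmetric interval, unlike the Taylor approach.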
Comparison of the 3 procedures
The procedures are compared using the following data configuration:
x <- 1:10
a <- 20
b <- -2
y <- a + b*x + rnorm(length(x), mean=0, sd=1)
10000 different samples are generated and analyzed using the 3 methods. The code (R) used to generate and analyze them can be found at: https://github.com/adrienrenaud/stackExchange/blob/master/crossValidated/q221630/answer.ipynb
MIB and CAPITANI-POLLASTRI give equivalent results.
First order Taylor expansion differs significantly from the two other methods.
MIB and CAPITANI-POLLASTRI suffer from under-coverage. The 68% (95%) CI is found to contain the true value 63% (92%) of the time.
First order Taylor expansion suffers from over-coverage. The 68% (95%) CI is found to contain the true value 87% (99%) of the time.
Conclusions
The x-intercept distribution is asymmetric. This justifies an asymmetric confidence interval. MIB and CAPITANI-POLLASTRI give equivalent results. CAPITANI-POLLASTRI has a nice theoretical justification and it gives grounds for MIB. MIB and CAPITANI-POLLASTRI suffer from moderate under-coverage and can be used to set confidence intervals.
How to calculate the confidence interval of the x-intercept in a linear regression?
I would recommend bootstrapping the residuals:
library(boot)
set.seed(42)
# 'fit' is the lm(y ~ x) model from the question; resample its residuals
sims <- boot(residuals(fit), function(r, i, d = data.frame(x, y), yhat = fitted(fit)) {
  d$y <- yhat + r[i]
  fitb <- lm(y ~ x, data = d)
  -coef(fitb)[1]/coef(fitb)[2]
}, R = 1e4)
lines(quantile(sims$t, c(0.025, 0.975)), c(0, 0), col = "blue")
What you show in the graph are the points where the lower/upper limit of the confidence band of the predictions cross the axis. I don't think these are the confidence limits of the intercept, but maybe they are a rough approximation.
Mathematical and statistical prerequisites to understand particle filters?
You can get shockingly far with just a few basic concepts. Notation, an explosion of variables etc... can make things look complicated, but the core idea of particle filtering is remarkably simple.
Some basic probability that you would need to (and likely already do!) understand:
Computing marginal distribution: $P(X = x) = \sum_i P(X = x, Y = y_i)$
Def. Conditional probability: $P(X \mid Y) = \frac{P(X,Y)}{P(Y)}$
Bayes Rule: $P(X \mid Y) = \frac{P(Y \mid X) P(X)}{P(Y)}$
Bayesian terms: eg. prior, likelihood, posterior (+1 @Yair Daon, I agree!)
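These identities are easy to sanity-check numerically; a tiny Python example with a made-up joint table:

```python
# Made-up joint distribution P(X, Y) over X in {0, 1}, Y in {0, 1}
joint = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}

# Marginal: P(X = 1) = sum_y P(X = 1, Y = y)
p_x1 = sum(p for (x, y), p in joint.items() if x == 1)

# Conditional: P(X = 1 | Y = 1) = P(X = 1, Y = 1) / P(Y = 1)
p_y1 = sum(p for (x, y), p in joint.items() if y == 1)
p_x1_given_y1 = joint[(1, 1)] / p_y1

# Bayes rule gives the same number the other way around
p_y1_given_x1 = joint[(1, 1)] / p_x1
bayes = p_y1_given_x1 * p_x1 / p_y1

assert abs(p_x1_given_y1 - bayes) < 1e-12
```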
The basic steps of a particle filter are incredibly simple:
First:
Start with some beliefs about some hidden state. For example, you may start with the belief that your rocket is on the launch pad. (In a particle filter, beliefs about the hidden state will be represented with a cloud of points, each point denotes a possible value of the hidden state. Each point is also associated with a probability of the state being the true state.)
Then you iterate the following steps to update from time $t$ to time $t+1$:
Prediction step: Move the points forward based upon the law of motion (eg. move points forward based upon the rocket's current speed, trajectory etc...). This will typically expand the cloud of points as uncertainty increases.
Probability update step: Use data, sensor input to update probabilities associated with points using Bayes Rule. This will typically collapse back the cloud of points as uncertainty is reduced.
Add some particle-filtering-specific steps/tricks, eg.:
Occasionally resample your points so that each point has equal probability.
Mix in some noise, prevent your probability step (2) from collapsing your cloud of points too much (in particle filtering, it's important that there's at least one point with positive probability vaguely at your true location!)
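These steps fit in a page of code. A minimal bootstrap particle filter in Python for a toy 1-D random walk observed through Gaussian noise (every model parameter here is invented for illustration):

```python
import math
import random

random.seed(1)

N = 2000                              # number of particles
sigma_move, sigma_obs = 1.0, 0.5      # dynamics noise, sensor noise

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Start with beliefs about the hidden state: a cloud of particles
particles = [random.gauss(0.0, 1.0) for _ in range(N)]

true_state = 0.0
for t in range(30):
    true_state += random.gauss(0, sigma_move)        # hidden dynamics
    obs = true_state + random.gauss(0, sigma_obs)    # noisy sensor reading

    # Prediction step: move particles by the law of motion (cloud expands)
    particles = [p + random.gauss(0, sigma_move) for p in particles]

    # Update step: reweight particles by the observation likelihood (Bayes)
    weights = [gauss_pdf(obs, p, sigma_obs) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]

    # Resample so every particle has equal probability again
    particles = random.choices(particles, weights=weights, k=N)

estimate = sum(particles) / N        # posterior mean of the hidden state
```

The posterior mean tracks the hidden random walk even though only noisy observations were used.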
Example:
Initialize your filter:
- Look at your location, where you are standing. Now close your eyes.
Then iterate:
Take a step forward with your eyes closed.
Prediction step: given past beliefs about where you were standing, predict where you are now standing given a step forward. (Note how uncertainty expands because your step forward with your eyes closed isn't super precise!)
Update step: Use sensors (eg. feeling around, etc...) to update your beliefs about where you're standing.
REPEAT!
The probability machinery required to implement is basically just basic probability: Bayes rule, computing marginal distribution etc...
Highly related ideas that might help understand the big picture:
In some sense, steps (1) and (2) are common to any Bayesian filtering problem. Some highly related concepts to possibly read about:
Hidden Markov model. A process is Markov if the past is independent of the future given the current state. Almost any time series is modeled as some kind of Markov process. A Hidden Markov Model is one where the state isn't directly observed (eg. you never directly observe the exact location of your rocket and instead infer its location through a Bayesian filter).
Kalman Filter. This is an alternative to particle filtering that's commonly used. It's basically a Bayesian filter where everything is assumed to be multivariate Gaussian.
Mathematical and statistical prerequisites to understand particle filters?
You should learn about easier-to-code state space models and closed-form filtering first (i.e. Kalman filters, hidden Markov models). Matthew Gunn is correct that you can get surprisingly far with simple concepts, but in my humble opinion, you should make this an intermediate goal because:
1.) Relatively speaking, there are more moving parts in state space models. When you learn SSMs or hidden markov models, there is a lot of notation. This means there are more things to keep in your working memory while you play around with verifying things. Personally, when I was learning about Kalman filters and linear-Gaussian SSMs first, I was basically thinking "eh this is all just properties of multivariate normal vectors...I just have to keep track of which matrix is which." Also, if you're switching between books, they often change notation.
Afterwards I thought about it like "eh, this is all just Bayes' rule at every time point." Once you think of it this way you understand why conjugate families are nice, as in the case of the Kalman filter. When you code up a hidden markov model, with its discrete state space, you see why you don't have to calculate any likelihood, and filtering/smoothing is easy. (I think I am deviating from the conventional HMM jargon here.)
2.) Cutting your teeth on coding a lot of these up will make you realize how general the definition of a state space model is. Pretty soon you'll be writing down models you want to use, and at the same time seeing why you can't. First you will eventually see that you just can't write it down in one of these two forms that you're used to. When you think about it a little more, you write down Bayes' rule, and see the problem is your inability to calculate some sort of likelihood for the data.
So you will eventually fail at being able to calculate these posterior distributions (smoothing or filtering distributions of the states). To take care of this, there are a lot of approximate filtering stuff out there. Particle filtering is just one of them. The main takeaway of particle filtering: you simulate from these distributions because you can't calculate them.
How do you simulate? Most algorithms are just some variant of importance sampling. But it does get more complicated here as well. I recommend that tutorial paper by Doucet and Johansen (http://www.cs.ubc.ca/~arnaud/doucet_johansen_tutorialPF.pdf). If you get how closed form filtering works, they introduce the general idea of importance sampling, then the general idea of monte carlo method, and then show you how to use these two things to get started with a nice financial time series example. IMHO, this is the best tutorial on particle filtering that I have come across.
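The importance-sampling idea itself is small enough to sketch: estimate an expectation under a target density you can only evaluate up to a constant, using weighted samples from a proposal (Python; the target and proposal here are arbitrary choices for illustration):

```python
import math
import random

random.seed(0)

# Target: N(2, 0.5^2) known only up to a constant; proposal: N(0, 2^2)
def unnorm_target(x):
    return math.exp(-0.5 * ((x - 2.0) / 0.5) ** 2)

def proposal_pdf(x):
    return math.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * math.sqrt(2 * math.pi))

xs = [random.gauss(0.0, 2.0) for _ in range(200_000)]
ws = [unnorm_target(x) / proposal_pdf(x) for x in xs]   # importance weights
total = sum(ws)

# Self-normalized estimate of E[X] under the target (true value: 2.0)
mean_est = sum(w * x for w, x in zip(ws, xs)) / total
```

Sequential importance sampling in a particle filter is this same computation, applied recursively over time with resampling.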
In addition to adding two new ideas to the mix (importance sampling and the monte carlo method), there's more notation now. Some densities you're sampling from now; some you're evaluating, and when you evaluate them, you're evaluating at samples. The result, after you code it all up, are weighted samples, deemed particles. They change after every new observation. It would be very hard to pick all of this up at once. I think it's a process.
I apologize if I'm coming across as cryptic, or handwavy. This is just the timeline for my personal familiarity with the subject. Matthew Gunn's post probably more directly answers your question. I just figured I would toss out this response.
Neural Networks Vs Structural Equation Modeling What's the Difference?
Short answer: With SEM, the goal is generally to understand the relationships between the variables. With the type of ANNs you have been studying, the nodes are a way of transforming the data so that the predictor variables can better explain the outcomes. Ultimately the similarity is pretty superficial: while the diagrams look similar, you will struggle to get good predictions from an SEM and you will also struggle to interpret the relationships between variables in an ANN.
Pedantic answer: there are lots of different types of SEMs and ANNs. Many do not look so similar. E.g., a Kohonen network looks little like an SEM, and is not great for prediction. When SEM is used to address endogeneity, it can be good for prediction, but such SEMs usually don't get drawn as pretty network diagrams.
What's the relationship between an SVM and hinge loss?
Here's my attempt to answer your questions:
Is an SVM as simple as saying it's a discriminative classifier that simply optimizes the hinge loss? Or is it more complex than that? Yes, you can say that. Also, don't forget that it regularizes the model too. I wouldn't say SVM is more complex than that, however, it is important to mention that all of those choices (e.g. hinge loss and $L_2$ regularization) have precise mathematical interpretations and are not arbitrary. That's what makes SVMs so popular and powerful. For example, hinge loss is a continuous and convex upper bound to the task loss which, for binary classification problems, is the $0/1$ loss. Note that $0/1$ loss is non-convex and discontinuous. Convexity of hinge loss makes the entire training objective of SVM convex. The fact that it is an upper bound to the task loss guarantees that the minimizer of the bound won't have a bad value on the task loss. $L_2$ regularization can be geometrically interpreted as the size of the margin.
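The upper-bound relation between the two losses is easy to verify numerically (Python sketch; the scores are arbitrary):

```python
def zero_one_loss(y, f):
    # 0/1 task loss: 1 when the score's sign disagrees with the label
    return 1.0 if y * f <= 0 else 0.0

def hinge_loss(y, f):
    # hinge loss: continuous, convex, and an upper bound on the 0/1 loss
    return max(0.0, 1.0 - y * f)

# hinge >= 0/1 for every label/score combination we try
for y in (+1, -1):
    for f in (-3.0, -0.5, 0.0, 0.2, 0.999, 1.0, 5.0):
        assert hinge_loss(y, f) >= zero_one_loss(y, f)
```

Note the hinge loss also penalizes correct classifications with margin $y f < 1$, which is what drives the large-margin behavior.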
How do the support vectors come into play?
Support vectors play an important role in training SVMs. They identify the separating hyperplane. Let $D$ denote a training set and $SV(D) \subseteq D$ be the set of support vectors that you get by training an SVM on $D$ (assume all hyperparameters are fixed a priori). If we throw out all the non-SV samples from $D$ and train another SVM (with the same hyperparameter values) on the remaining samples (i.e. on $SV(D)$) we get the same exact classifier as before!
What about the slack variables? SVM was originally designed for problems where there exists a separating hyperplane (i.e. a hyperplane that perfectly separates the training samples from the two classes), and the goal was to find, among all separating hyperplanes, the hyperplane with the largest margin. The margin, denoted by $d(w, D)$, is defined for a classifier $w$ and a training set $D$. Assuming $w$ perfectly separates all the examples in $D$, we have $d(w, D) = \min_{(x, y) \in D} y \frac{w^Tx}{||w||_2}$, which is the distance of the closest training example from the separating hyperplane $w$. Note that $y \in \{+1, -1\}$ here.
The introduction of slack variables made it possible to train SVMs on problems where either 1) a separating hyperplane does not exist (i.e. the training data is not linearly separable), or 2) you are happy to (or would like to) sacrifice making some error (higher bias) for better generalization (lower variance). However, this comes at the price of breaking some of the concrete mathematical and geometric interpretations of SVMs without slack variables (e.g. the geometrical interpretation of the margin).
Why can't you have deep SVM's?
The SVM objective is convex. More precisely, it is piecewise quadratic; that is because the $L_2$ regularizer is quadratic and the hinge loss is piecewise linear. The training objectives in deep hierarchical models, however, are much more complex. In particular, they are not convex. Of course, one can design a hierarchical discriminative model with hinge loss and $L_2$ regularization etc., but it wouldn't be called an SVM. In fact, the hinge loss is commonly used in DNNs (Deep Neural Networks) for classification problems.
Is Mahalanobis distance equivalent to the Euclidean one on the PCA-rotated data?
Mahalanobis distance is equivalent to the Euclidean distance on the PCA-transformed data (not just PCA-rotated!), where by "PCA-transformed" I mean (i) first rotated to become uncorrelated, and (ii) then scaled to become standardized. This is what @ttnphns said in the comments above and what both @DmitryLaptev and @whuber meant and explicitly wrote in their answers that you linked to (one and two), so I encourage you to re-read their answers and make sure this point becomes clear.
This means that you can make your code work simply by replacing pc$x with scale(pc$x) in the fourth line from the bottom.
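The equivalence is easy to verify numerically. A self-contained 2-D check in plain Python rather than R (illustrative data; note that rotation alone is not enough — each PC must also be divided by its standard deviation):

```python
import math
import random

random.seed(0)

# Correlated 2-D data (made up for illustration)
pts = [(u + 0.8 * v, v)
       for u, v in ((random.gauss(0, 2.0), random.gauss(0, 0.5))
                    for _ in range(200))]

n = len(pts)
mx = sum(x for x, _ in pts) / n
my = sum(y for _, y in pts) / n
pts = [(x - mx, y - my) for x, y in pts]

# Sample covariance matrix [[sxx, sxy], [sxy, syy]]
sxx = sum(x * x for x, _ in pts) / (n - 1)
syy = sum(y * y for _, y in pts) / (n - 1)
sxy = sum(x * y for x, y in pts) / (n - 1)

# Mahalanobis distance between the first two points (2x2 inverse by hand)
det = sxx * syy - sxy * sxy
dx, dy = pts[0][0] - pts[1][0], pts[0][1] - pts[1][1]
maha = math.sqrt((syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det)

# PCA of the symmetric 2x2 covariance: rotation angle + eigenvalues
theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
c, s = math.cos(theta), math.sin(theta)
half_sum = (sxx + syy) / 2
half_diff = math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
lam1, lam2 = half_sum + half_diff, half_sum - half_diff

# Rotate the difference into PC coordinates, then standardize each PC
p1 = c * dx + s * dy
p2 = -s * dx + c * dy
eucl = math.hypot(p1 / math.sqrt(lam1), p2 / math.sqrt(lam2))

assert abs(maha - eucl) < 1e-9
```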
Regarding your second question, with $n<p$, the covariance matrix is singular and hence Mahalanobis distance is undefined. Indeed, think about Euclidean distance in the PCA-transformed data; when $n<p$, some of the eigenvalues of the covariance matrix are zero and the corresponding PCs have zero variance (all the data points are projected to zero). It is therefore impossible to standardize these PCs, as it is impossible to divide by zero. Mahalanobis distance cannot be defined "in these directions".
What one can do is to focus exclusively on the subspace where the data actually lie, and define Mahalanobis distance in this subspace. This is equivalent to doing PCA and keeping only non-zero components, which is I think what you suggested in your question #2. So the answer to this is yes. I am not sure though how useful this can be in practice, as this distance is likely to be very unstable (near-zero eigenvalues are known with very bad precision, but are going to be inverted in the Mahalanobis formula, possibly yielding gross errors).
Is Mahalanobis distance equivalent to the Euclidean one on the PCA-rotated data?
Mahalanobis distance is the scaled Euclidean distance when the covariance matrix is diagonal. In PCA the covariance matrix between components is diagonal. The scaled Euclidean distance is the Euclidean distance where the variables were scaled by their standard deviations. See p.303 in Encyclopedia of Distances, a very useful book, btw.
It seems that you're trying to use Euclidean distance on the subset of factors of PCA. You probably reduced dimensionality using PCA. You can do it, but there will be some error introduced which is "proportional" to the proportion of variance that is explained by your PCA components. You'll also have to adjust the distance for the scale (i.e. variances explained), of course. | Is Mahalanobis distance equivalent to the Euclidean one on the PCA-rotated data? | Mahalanobis distance is the scaled Euclidean distance when the covariance matrix is diagonal. In PCA the covariance matrix between components is diagonal. The scaled Euclidean distance is the Euclidea | Is Mahalanobis distance equivalent to the Euclidean one on the PCA-rotated data?
25,886 | what is the difference between area under roc and weighted area under roc? | One of the advantages to ROC curves is that they are agnostic to class skew. ROC curves remain the same whether your data is balanced or not, bar some finite-sample effects when you have very few examples of one class.
As such, weighted ROC curves have nothing to do with class balance. Instead, weighted ROC curves are used when you're interested in performance in a certain region of ROC space (e.g. high recall) and were proposed as an improvement over partial AUC (which does exactly this but has some issues). You can read more about it in Weighted Area Under the Receiver Operating Characteristic Curve and Its Application to Gene Selection by Li and Fine.
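The class-skew invariance claimed above follows from the pairwise-ranking definition of AUC, and a toy Python check makes it concrete (the scores below are made up): replicating the negative class tenfold leaves the AUC unchanged, because it is a ratio over positive–negative pairs.

```python
def auc(pos_scores, neg_scores):
    # AUC = probability that a random positive outscores a random negative,
    # counting ties as one half (the normalized Mann-Whitney U statistic).
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.4]
neg = [0.7, 0.3, 0.2, 0.1]

balanced = auc(pos, neg)
skewed = auc(pos, neg * 10)   # ten times more negatives: heavy imbalance
print(balanced, skewed)       # identical values
```

Metrics such as precision, by contrast, would change under the same resampling, which is why AUC is often preferred for skewed problems.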
25,887 | what is the difference between area under roc and weighted area under roc? | I second Manish Mahajan's suggestion of looking into SMOTE, which can improve your ROC by synthesizing data from the minority class. It is a researched technique and not as simple as "a fictitious set of data" as user48956 put it. See the paper.
25,888 | what is the difference between area under roc and weighted area under roc? | Hello. If you have class imbalance, then the approach usually involves balancing the classes, either by oversampling or undersampling, or by creating synthetic samples using the SMOTE algorithm. Once this is done, the usual area under the ROC curve can be looked at. Hope that makes sense. I am afraid I haven't yet used weighted ROC, so I am not sure about that approach.
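For reference, the core idea of SMOTE mentioned in these answers is just interpolation between a minority-class point and one of its nearest minority-class neighbours. A minimal, non-production Python sketch (the `smote_sketch` helper and the toy points are illustrative, not the library implementation):

```python
import random

def smote_sketch(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic minority samples by moving a chosen point
    a random fraction of the way towards one of its k nearest neighbours
    within the minority class (brute-force neighbour search)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        p = rng.choice(minority)
        neighbours = sorted((q for q in minority if q is not p),
                            key=lambda q: sum((a - b) ** 2 for a, b in zip(p, q)))[:k]
        q = rng.choice(neighbours)
        lam = rng.random()                       # interpolation weight in [0, 1)
        out.append(tuple(a + lam * (b - a) for a, b in zip(p, q)))
    return out

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
synthetic = smote_sketch(minority, n_new=3)
print(synthetic)   # new points lying between existing minority samples
```

Real implementations (e.g. in imbalanced-learn) add proper k-NN search, handling of categorical features, and variants such as borderline-SMOTE.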
25,889 | Data Visualization: Alternatives to Choropleth maps for spatial data and statistical graphics | Choropleths have a number of flaws, as you note. Most infamous is the way a shape's size is usually unrelated to its measure, yet the size is very prominent visually (electoral maps are a classic example).
Cartograms strive to solve the sizing issue but distort the geography, which looks odd and can be a problem if you're looking for geographic patterns.
A few common alternatives more applicable to US counties:
Geographic Scatterplot
Draw a dot for each shape. That way every shape gets the same amount of color, though overstriking is still an issue for very tiny shapes.
Micromaps
Works with choropleths or geographic scatterplots. Partition the graphs by geographic sector or some measure, not necessarily the same as the coloring variable.
Custom Coloring
You mentioned needing to focus on a particular range of values. One way to help with that is to use a coloring scheme that highlights that range at the expense of others.
Smooth Contours
It sounds like you want to see individual shapes, but if you're looking for broad patterns, plotting a smoothed contour like in a weather map can be useful (no picture).
No Map
Finally, if the data values are more important than the geographic patterns, consider another kind of graph altogether, such as a ranked bar chart of the top and bottom counties. The general weakness of maps is that they use the two most prominent dimensions (X and Y) for geography, leaving only lesser visual dimensions for the data measures, so the geography must be relevant to justify using a map.
25,890 | Data Visualization: Alternatives to Choropleth maps for spatial data and statistical graphics | Here is a slew of examples (hopefully that's OK) which try to show variations on the map theme while providing a range of flexibility that is lacking in the standard choropleth. This may overlap other answers, but I am trying to be as exhaustive as possible.
Favorite: Interactive density with multiple levels of scale
Before going through them, I will say that the "racial dot plot" of US census data is one of the most compelling visualizations that solves this problem. It is highly interactive, handles scale, and allows for an incredible density of information. Creating it may be difficult, but it sure is beautiful. Check it out first: http://demographics.coopercenter.org/DotMap/index.html
After that, here is a more systematic take on the examples.
Option 1: plot the data based on the size of an object instead of coloring an area. This can be with a bubble, density, or weather map style.
This is called out on the same site that you linked to: http://indiemapper.com/app/learnmore.php?l=dot_density
This is done with a bubble plot in this example: http://bost.ocks.org/mike/bubble-map/
The racial dot map above is another example.
The "stop and frisk" viz is also a great example of using density to encode info. http://www.nytimes.com/interactive/2014/09/19/nyregion/stop-and-frisk-is-all-but-gone-from-new-york.html
Here is a weather map example showing how drought affects the US over time. It is good for an overall spatial representation that is independent of man-made boundaries. http://www.nytimes.com/interactive/2014/upshot/mapping-the-spread-of-drought-across-the-us.html?abt=0002&abg=1
Option 2: extract the shapes of the counties/states and plot them in small-multiple fashion with the same scaling
The same site you linked to talks about "non-contiguous cartograms". You can extend this idea to scale the states all the same and show them in a grid or other orderly arrangement. This allows small states to be shown at the same level of detail as large ones. http://indiemapper.com/app/learnmore.php?l=cartogram
Along that same vein, here is an example of the small multiples comparing different bike sharing options across the world. It removes boundaries and scale and encodes info as density. http://qz.com/89019/29-of-the-worlds-largest-bike-sharing-programs-in-one-map/
Option 3: allow for a degree of interactivity so that the scale can be changed at will by the user
The racial dot map is the best example of this above.
There is another stop and frisk example which allows for zooming in and out on the neighborhoods for emphasis. http://www.nytimes.com/interactive/2010/07/11/nyregion/20100711-stop-and-frisk.html?_r=0
Option 4: have multiple visualizations that highlight areas of interest at the small scale
Some of the NYTimes examples do this. Here is one showing baseball fans over the US with small scale maps of the intersections. It does a great job of splitting scale. http://www.nytimes.com/interactive/2014/04/23/upshot/24-upshot-baseball.html?abt=0002&abg=1
25,891 | Regression with inverse independent variable | When Y is plotted against $\frac{1}{X}$, I see that there is a linear relationship (upward trend) between the two. Now, this also means that there is a linear downward trend between Y and X
The last sentence is wrong: there is a downward trend, but it is by no means linear:
I used a $f(x) = \frac{1}{x}$ as function plus a bit of noise on $Y$. As you can see, while plotting $Y$ against $\frac{1}{X}$ yields a linear behaviour, $Y$ against $X$ is far from linear.
(@whuber points out that the $Y$ against $\frac{1}{X}$ plot doesn't look homoscedastic. I think it appears to have higher variance for low $Y$ because the much higher case density leads to a larger range, which is essentially what we perceive. Actually, the data is homoscedastic: I used Y = 1 / X + rnorm (length (X), sd = 0.1) to generate the data, so there is no dependency on the size of $X$.)
So in general the relationship is very much non-linear. That is, unless your range of $X$ is so narrow that you can approximate $\frac{d \frac{1}{x}}{dx} = - \frac{1}{x^2} \approx const.$ Here's an example:
Bottomline:
In general, it is very hard to approximate a $\frac{1}{X}$-type function by a linear or polynomial function. And without offset term you'll never get a reasonable approximation.
If the $X$ interval is narrow enough to allow a linear approximation, you will in any case not be able to tell from the data that the relation should be $\frac{1}{X}$ rather than linear in $X$.
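The bottom line can be checked with a few lines of arithmetic. A Python sketch (made-up, noise-free data from $y = 5/x$ on $[1, 2]$, not the poster's data): fitting a no-intercept straight line versus a no-intercept $b/x$ model by least squares shows the $1/x$ form fitting exactly while the line fits badly.

```python
def fit_no_intercept(features, y):
    # least squares slope for y ~ b * f (no intercept): b = sum(f*y) / sum(f^2)
    b = sum(f * v for f, v in zip(features, y)) / sum(f * f for f in features)
    sse = sum((v - b * f) ** 2 for f, v in zip(features, y))
    return b, sse

xs = [1 + i / 10 for i in range(11)]      # x on a narrow range, [1, 2]
ys = [5.0 / x for x in xs]                # noise-free y = 5 / x

b_lin, sse_lin = fit_no_intercept(xs, ys)                    # y ~ b * x
b_inv, sse_inv = fit_no_intercept([1 / x for x in xs], ys)   # y ~ b * (1/x)
print(sse_lin, sse_inv)   # the 1/x model has SSE near zero, the line does not
```

Adding an intercept would narrow the gap on such a short range, which is exactly the point made in the answer.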
25,892 | Regression with inverse independent variable | I see no reason for them to be "approximately equal" in general -- but what exactly do you mean by approximately equal?
Here's a toy example:
library(ggplot2)
n <- 10^3
df <- data.frame(x=runif(n, min=1, max=2))
df$y <- 5 / df$x + rnorm(n)
p <- (ggplot(df, aes(x=x, y=y)) +
geom_point() +
geom_smooth(method="lm", formula=y ~ 0 + x) + # Blue, OP's y hat
geom_smooth(method="lm", formula=y ~ 0 + I(x^-1), color="red")) # Red, OP's y tilde
p
The picture:
The "blue" model would do much better if it were allowed to have an intercept (i.e. constant) term... | Regression with inverse independent variable | I see no reason for them to be "approximately equal" in general -- but what exactly do you mean by approximately equal?
25,893 | What is the difference between controlling for a variable in a regression model vs. controlling for a variable in your study design? | By "controlling for a variable in your study design", I assume you mean causing a variable to be constant across all study units or manipulating a variable so that the level of that variable is independently set for each study unit. That is, controlling for a variable in your study design means that you are conducting a true experiment. The benefit of this is that it can help with inferring causality.
In theory, controlling for a variable in your regression model can also help with inferring causality. However, this is only the case if you control for every variable that has a direct causal connection to the response. If you omit such a variable (perhaps you didn't know to include it), and it is correlated with any of the other variables, then your causal inferences will be biased and incorrect. In practice, we don't know all the relevant variables, so statistical control is a fairly dicey endeavor that relies on big assumptions you can't check.
However, your question asks about "reducing error and yielding more precise predictions", not inferring causality. This is a different issue. If you were to make a given variable constant through your study design, all of the variability in the response due to that variable would be eliminated. On the other hand, if you simply control for a variable, you are estimating its effect which is subject to sampling error at a minimum. In other words, statistical control wouldn't be quite as good, in the long run, at reducing residual variance in your sample.
But if you are interested in reducing error and getting more precise predictions, presumably you primarily care about out of sample properties, not the precision within your sample. And therein lies the rub. When you control for a variable by manipulating it in some form (holding it constant, etc.), you create a situation that is more artificial than the original, natural observation. That is, experiments tend to have less external validity / generalizability than observational studies.
In case it's not clear, an example of a true experiment that holds something constant might be assessing a treatment in a mouse model using inbred mice that are all genetically identical. On the other hand, an example of controlling for a variable might be representing family history of disease by a dummy code and including that variable in a multiple regression model (cf., How exactly does one "control for other variables"?, and How can adding a 2nd IV make the 1st IV significant?).
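The omitted-variable bias described above ("if you omit such a variable ... your causal inferences will be biased") can be demonstrated in a short simulation. This Python sketch is illustrative, with made-up coefficients: the true effect of x on y is 1, but leaving out the correlated control z roughly doubles the estimated slope, while including z recovers the truth.

```python
import random

rng = random.Random(42)
n = 5000
x = [rng.gauss(0, 1) for _ in range(n)]
z = [xi + rng.gauss(0, 0.5) for xi in x]      # z is strongly correlated with x
y = [xi + zi for xi, zi in zip(x, z)]          # true model: y = 1*x + 1*z

# simple regression of y on x alone (no intercept; data are roughly zero-mean):
b_naive = sum(a * b for a, b in zip(x, y)) / sum(a * a for a in x)

# two-variable least squares via the 2x2 normal equations (Cramer's rule):
sxx = sum(a * a for a in x); szz = sum(a * a for a in z)
sxz = sum(a * b for a, b in zip(x, z))
sxy = sum(a * b for a, b in zip(x, y)); szy = sum(a * b for a, b in zip(z, y))
det = sxx * szz - sxz * sxz
b_x = (sxy * szz - szy * sxz) / det
b_z = (szy * sxx - sxy * sxz) / det
print(b_naive)      # close to 2: badly biased upward
print(b_x, b_z)     # close to (1, 1): statistically controlling for z works
```

Of course, as the answer stresses, this only works because z was known and measured; an unmeasured confounder leaves the naive estimate biased.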
By "controlling for a variable in your study design", I assume you mean causing a variable to be constant across all study units or manipulating a variable so that the level of that variable is independently set for each study unit. That is, controlling for a variable in your study design means that you are conducting a true experiment. The benefit of this is that it can help with inferring causality.
In theory, controlling for a variable in your regression model can also help with inferring causality. However, this is only the case if you control for every variable that has a direct causal connection to the response. If you omit such a variable (perhaps you didn't know to include it), and it is correlated with any of the other variables, then your causal inferences will be biased and incorrect. In practice, we don't know all the relevant variables, so statistical control is a fairly dicey endeavor that relies on big assumptions you can't check.
However, your question asks about "reducing error and yielding more precise predictions", not inferring causality. This is a different issue. If you were to make a given variable constant through your study design, all of the variability in the response due to that variable would be eliminated. On the other hand, if you simply control for a variable, you are estimating its effect which is subject to sampling error at a minimum. In other words, statistical control wouldn't be quite as good, in the long run, at reducing residual variance in your sample.
But if you are interested in reducing error and getting more precise predictions, presumably you primarily care about out of sample properties, not the precision within your sample. And therein lies the rub. When you control for a variable by manipulating it in some form (holding it constant, etc.), you create a situation that is more artificial than the original, natural observation. That is, experiments tend to have less external validity / generalizability than observational studies.
In case it's not clear, an example of a true experiment that holds something constant might be assessing a treatment in a mouse model using inbred mice that are all genetically identical. On the other hand, an example of controlling for a variable might be representing family history of disease by a dummy code and including that variable in a multiple regression model (cf., How exactly does one “control for other variables”?, and How can adding a 2nd IV make the 1st IV significant?). | What is the difference between controlling for a variable in a regression model vs. controlling for
By "controlling for a variable in your study design", I assume you mean causing a variable to be constant across all study units or manipulating a variable so that the level of that variable is indepe |
25,894 | Name for outer product of gradient approximation of Hessian | The expected value of the outer product of the gradient of the log-likelihood is the "information matrix", or "Fisher information" irrespective of whether we use it instead of the negative of the Hessian or not, see this post. It is also the "variance of the score".
The relation that permits us to use the outer product of the gradient instead of the negative of the Hessian, is called the Information Matrix Equality, and it is valid under the assumption of correct specification (this is important but usually goes unmentioned), as well as some regularity conditions that permit the interchange of integration and differentiation.
Perhaps this could be useful also.
Note: In many corners it is just said "outer product of the gradient" without adding "with itself".
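A tiny simulation illustrates the information matrix equality under correct specification (a Python sketch with a made-up Gaussian example, known variance): the sample average of the squared score and the negative Hessian both estimate the same Fisher information.

```python
import random

rng = random.Random(1)
mu, n = 2.0, 20000
ys = [rng.gauss(mu, 1.0) for _ in range(n)]

# per-observation log-likelihood: l(mu) = -(y - mu)^2 / 2 + const
score = [y - mu for y in ys]    # dl/dmu
neg_hess = 1.0                  # -d2l/dmu2 is constant here

# outer product of the gradient (one-dimensional, so just the squared score)
opg = sum(s * s for s in score) / n
print(opg, neg_hess)            # both approximate the Fisher information, 1
```

Under misspecification the two estimates diverge, which is the basis of the "sandwich" covariance estimator combining them.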
25,895 | Name for outer product of gradient approximation of Hessian | The outer product of gradient estimator for the covariance matrix of maximum likelihood estimates is also known as the BHHH estimator, because it was proposed by Berndt, Hall, Hall and Hausman in
this paper:
Berndt, E.K., Hall, B.H., Hall, R.E. and Hausman, J.A. (1974).
"Estimation and Inference in Nonlinear Structural Models".
Annals of Economic and Social Measurement, 3, pp. 653-665.
In the discussion around equation (3.8) of the paper you may get further details justifying the use of this expression.
25,896 | How to discuss a scatterplot with multiple emerging lines? | You may have artefacts arising from restrictions on what is possible physically or on what is recorded (at the simplest, integers only). Completely anonymous $Y$ and $X$ don't suggest any confident guesses about how that arises, but it looks as if some $Y/X$ are favoured and I would certainly look at the distribution of that ratio. Also, if so it's not in my experience useful to look for separate models unless you really are mixing quite different situations. (For "physically" read "biologically" or whatever adverb makes sense.)
The more I look at this, the more I guess that lines such as $X/k$ or $kX$ are evident for integer $k$, because the values themselves are integers.
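The integer-ratio guess can be illustrated directly. A deterministic Python sketch (enumerating all pairs of integers 1–10, not the poster's data): counting the ratio $Y/X$ shows strong modes at simple ratios such as 1, 2 and 1/2, which is exactly what makes points pile up on lines $Y = kX$ and $Y = X/k$.

```python
from collections import Counter
from fractions import Fraction

# all integer (X, Y) pairs with values 1..10, tallied by exact ratio Y/X
ratios = Counter(Fraction(y, x) for x in range(1, 11) for y in range(1, 11))

for r in (Fraction(1, 2), Fraction(1, 1), Fraction(2, 1), Fraction(3, 1)):
    print(r, ratios[r])
# ratio 1 occurs 10 times, ratios 2 and 1/2 five times each, 3 three times:
# simple ratios are heavily favoured purely because the values are integers.
```

With millions of points, the same mechanism produces the visible stripes even when the measurements themselves are noisy.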
A different but possibly related point is that to me these data cry out for transformations. If they are all positive, logarithms are indicated. I fear that you have zeros, in which case what to do is open to discussion. For example, a line at $Y = 0$ may be guessed at from your graph. If there are zeros, some swear by $\log(Y + \text{constant})$; a cube root transformation should also help. Whatever helps you see patterns more clearly is defensible.
A point of terminology: skewness in statistics is described with reference to the tail that is more stretched out. You're free to regard this terminology as backwards. Here both variables are skewed to high values or positively or right-skewed.
UPDATE: Thanks for the extra graphs, which are most helpful. Almost all guesses appear confirmed. (The bottom line, so to speak, is $Y = 1$, not $Y = 0$.) The stripes are artefacts or secondary effects of using integers, which may well be the only, or at least the most practical, way of measuring what you are measuring (about which the question remains discreet). The log-log and other plots expose the discreteness. So despite the discretion, the discreteness is confirmed. There are pronounced modes (peaks in distribution) for the ratios 1/4, 1/2, 1/1 and 2/1.
As before, I wouldn't advise modelling different stripes differently without a scientific reason to distinguish them or treat them separately. You should just average over what you have. (There may be known methods with this kind of data to suppress the discreteness. If people in your field routinely measure millions of points for each plot, it is hard to believe that this has not been seen before.)
The correlation should certainly be positive. Apart from a formal significance test, which here would be utterly useless as minute correlations will qualify as significant with this sample size, whether it is declared strong is a matter of the expectations and standards in your field. Comparing your correlation quantitatively with others' results is a way to go.
Detail: The skewness is still described the wrong way round according to statistical convention. These variables are right-skewed; that jargon fits when looking at a histogram with horizontal magnitude axis and noting that skewness is named for the longer tail, not the concentration with more values.
25,897 | How to discuss a scatterplot with multiple emerging lines? | The tool you want, I think, is called switching regression. The idea is that there are several regression lines, and each data point is assigned to one of them. For example, the equation of the first regression line would be:
\begin{align}
Y_i &= \alpha_1 + \beta_1X_i + \epsilon_i
\end{align}
The equation of the $m^{th}$ regression line would be:
\begin{align}
Y_i &= \alpha_m + \beta_mX_i + \epsilon_i
\end{align}
In total, there are $M$ different regression lines, say. For any given data point, we only get to see one of the regression lines. Thus, there has to be some mechanism for deciding which regression line we see for each point. The simplest mechanism is just the multinomial distribution. That is, we see the $m^{th}$ regression line with probability $p_m$, where $\sum_m p_m =1$.
The model is usually estimated by maximum likelihood. Assuming that the $\epsilon$ are distributed $N(0,\sigma^2)$, the likelihood function you would maximize would be:
\begin{align}
L(\alpha,\beta,\sigma) = \prod_{i=1}^N \sum_{m=1}^M p_m\frac{1}{\sigma}\phi\left(\frac{Y_i-\alpha_m-\beta_mX_i}{\sigma}\right)
\end{align}
The function $\phi$ is the standard normal density. You maximize this in the $3M+1$ parameters, subject to the constraints $\sum_m p_m=1,\; p_m\ge0$. This is usually a somewhat cranky maximization problem if you are going to use quasi-Newton methods to solve it. You can't just start all the $\alpha$ and $\beta$ at zero and the $p_m$ at $\frac{1}{M}$, for example. You have to give distinct starting values to the $\alpha$ and $\beta$ so that the algorithm can "tell them apart."
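In practice a likelihood of this form is often maximized with the EM algorithm rather than a quasi-Newton method, since the per-point responsibilities give a natural E-step. Here is a minimal pure-Python sketch for $M = 2$ lines with a shared $\sigma$; the data, the two lines, and the starting values are all made up for illustration:

```python
import math
import random

random.seed(1)

# Toy data from two regression lines (a "switching regression" setup):
#   line 1: y = 1 + 2x,  line 2: y = 5 - x,  both with N(0, 0.3^2) noise.
xs, ys = [], []
for _ in range(400):
    x = random.uniform(0, 10)
    if random.random() < 0.5:
        y = 1 + 2 * x + random.gauss(0, 0.3)
    else:
        y = 5 - x + random.gauss(0, 0.3)
    xs.append(x)
    ys.append(y)

M, N = 2, len(xs)
# Distinct starting values so the algorithm can "tell the lines apart".
a, b = [0.0, 6.0], [1.5, -0.5]
p, sigma = [0.5, 0.5], 2.0

def phi(z):  # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

for _ in range(200):
    # E-step: responsibility of each line for each point.
    resp = []
    for x, y in zip(xs, ys):
        w = [p[m] * phi((y - a[m] - b[m] * x) / sigma) / sigma for m in range(M)]
        s = sum(w) or 1e-300
        resp.append([wm / s for wm in w])
    # M-step: weighted least squares per line, then update p and sigma.
    sq = 0.0
    for m in range(M):
        Sw = sum(r[m] for r in resp)
        Sx = sum(r[m] * x for r, x in zip(resp, xs))
        Sy = sum(r[m] * y for r, y in zip(resp, ys))
        Sxx = sum(r[m] * x * x for r, x in zip(resp, xs))
        Sxy = sum(r[m] * x * y for r, x, y in zip(resp, xs, ys))
        b[m] = (Sw * Sxy - Sx * Sy) / (Sw * Sxx - Sx * Sx)
        a[m] = (Sy - b[m] * Sx) / Sw
        p[m] = Sw / N
        sq += sum(r[m] * (y - a[m] - b[m] * x) ** 2
                  for r, x, y in zip(resp, xs, ys))
    sigma = math.sqrt(sq / N)

print(sorted(zip(b, a)))  # slope/intercept pairs, close to (-1, 5) and (2, 1)
```

Note the distinct starting values for the two $(\alpha_m, \beta_m)$ pairs, for exactly the reason given above.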
There are a number of ways to make this more involved if you want to. Maybe you have a variable $Z_i$ which you think influences $p_m$, that is which influences which regression is chosen. Well, you can use a multinomial logit function to make $p_m$ be a function of $Z_i$:
\begin{align}
L(\alpha,\beta,\sigma,\delta,\gamma) = \prod_{i=1}^N \sum_{m=1}^M \left(\frac{\exp(\delta_m+\gamma_mZ_i)}{\sum_{m'} \exp(\delta_{m'}+\gamma_{m'}Z_i)}\right)\frac{1}{\sigma}\phi\left(\frac{Y_i-\alpha_m-\beta_mX_i}{\sigma}\right)
\end{align}
Now there are $5M+1$ parameters. Actually, there are $5M-1$ parameters because there is a normalization required on the $\delta, \gamma$ --- read up on the multinomial logit for an explanation.
Another way to make it more involved is to use some method for choosing $M$, the number of regression lines. I'm pretty casual about this kind of choice in my own work, so maybe someone else can point you towards the best way to choose it.
25,898 | How to discuss a scatterplot with multiple emerging lines? | I have observed similar behavior in some of my data sets. In my case the multiple-different lines were due to quantization error in one of my processing algorithms.
That is, we were looking at scatter plots of processed data, and the processing algorithm had some quantization effects that caused dependencies in the data that looked exactly like what you have above.
Fixing the quantization effects caused our output to look far smoother and less clumped.
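For what it's worth, the effect is easy to reproduce: quantizing continuous data collapses the ratio $y/x$ onto a small set of rationals, i.e. onto lines through the origin. A small Python sketch with simulated (not real) data:

```python
import random

random.seed(2)

# Continuous, noisily correlated (x, y) pairs.
raw = []
for _ in range(5000):
    x = random.uniform(1, 20)
    y = x * random.uniform(0.5, 2.0)
    raw.append((x, y))

# A hypothetical processing step that quantizes both values to integers.
quantized = [(round(x), round(y)) for x, y in raw]

# Before quantization essentially every ratio y/x is unique; afterwards the
# points collapse onto relatively few rational ratios q/p, i.e. onto a
# visible family of lines y = (q/p) x in the scatter plot.
unique_raw = len({y / x for x, y in raw})
unique_q = len({y / x for x, y in quantized if x > 0})
print(unique_raw, unique_q)
```

The handful of heavily populated ratios after rounding are precisely the "multiple emerging lines" pattern.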
As for your "linear correlation" comment. What you presented is insufficient to determine if this data is linear correlated or not. That is, in some fields, a correlation coefficient of > 0.7 is considered strong linear correlation. Given that most of your data is near the origin, it is quite conceivable that your data is linearly correlated relative to what "conventional wisdom" would say. Correlation tells you very little about a data set. | How to discuss a scatterplot with multiple emerging lines? | I have observed similar behavior in some of my data sets. In my case the multiple-different lines were due to quantization error in one of my processing algorithms.
That is, we looking at scatter p | How to discuss a scatterplot with multiple emerging lines?
I have observed similar behavior in some of my data sets. In my case the multiple-different lines were due to quantization error in one of my processing algorithms.
That is, we looking at scatter plots of processed data, and the processing algorithm had some quantization effects, that caused dependencies in the data that looked exactly like you have above.
Fixing the quantization effects, caused our output to look far smoother and less clumped.
As for your "linear correlation" comment. What you presented is insufficient to determine if this data is linear correlated or not. That is, in some fields, a correlation coefficient of > 0.7 is considered strong linear correlation. Given that most of your data is near the origin, it is quite conceivable that your data is linearly correlated relative to what "conventional wisdom" would say. Correlation tells you very little about a data set. | How to discuss a scatterplot with multiple emerging lines?
I have observed similar behavior in some of my data sets. In my case the multiple-different lines were due to quantization error in one of my processing algorithms.
That is, we looking at scatter p |
25,899 | Post-hoc tests for MANOVA: univariate ANOVAs or discriminant analysis? | "Significant influence from group" means that $H_0: {\mu_1}=\mu_2=\mu_3$ has been rejected, where $\mu_i$ is the mean vector of the dependent variables in group $i$. This can happen if $\mu_1=\mu_2\neq \mu_3$. In this case, discriminant analysis between group 1 and 2 would fail. You would have first to decompose the overall hypothesis into $\mu_1 = \mu_2$, $\mu_2 = \mu_3$ and $\mu_1 = \mu_3$. There, of course, multiplicity adjustments (e.g. Bonferroni) are again necessary.
Even if it does not fail, discriminant analysis gives you rather estimates of the effects, not test results. If you are in fact interested in such a tool (e.g. in order to diagnose to which group a new patient would belong), discriminant analysis will of course still be necessary.
Multiplicity adjustments of the hypotheses $H_0 ^ {j}:\;\mu_1^{j}=\mu_2^{j}=\mu_3^{j}$, where $j$ denotes a dependent variable, can be done with Bonferroni-method. The interpretation of a significant result would be that in the dependent variable you identified not all groups have equal means. Usually you would want to decompose this result as well into pairwise comparisons as above. Also you have to keep in mind that it may happen that you can reject the global hypothesis but fail with the post-hoc analyses.
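As a concrete illustration, the Bonferroni rule amounts to testing each $H_0^{j}$ at level $\alpha/m$, where $m$ is the number of dependent variables. A small sketch with hypothetical p-values (the numbers are made up):

```python
# Hypothetical p-values from nine univariate ANOVAs, one per dependent variable.
p_values = [0.001, 0.004, 0.012, 0.020, 0.030, 0.047, 0.210, 0.470, 0.940]
alpha = 0.05

m = len(p_values)
# Bonferroni: reject H0^j only if p_j <= alpha / m
# (equivalently, the adjusted p-value p_j * m is <= alpha).
rejected = [p * m <= alpha for p in p_values]
print(rejected)  # only the first two hypotheses survive the adjustment
```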
Your last question: as Bonferroni is quite conservative, you may consider using different methods, e.g. the SimComp R package. This would estimate the unknown dependency between the variables. Said information would lead to a less conservative adjustment and thus better power.
25,900 | Post-hoc tests for MANOVA: univariate ANOVAs or discriminant analysis? | These two follow-up approaches have very different goals!
Univariate ANOVAs (as follow-ups to MANOVA) aim at checking which individual variables (as opposed to all variables together) differ between groups.
Linear Discriminant Analysis, LDA, (as a follow-up to MANOVA) aims at checking which linear combination of individual variables leads to maximal group separability and at interpreting this linear combination.
This question asked about one-way MANOVA with only a single factor, but see here for the [more complicated] case of factorial MANOVA: How to follow up a factorial MANOVA with discriminant analysis?
So e.g. if your individual variables are weight and height, then with univariate ANOVAs you can test if weight and height, separately, differ between groups. With LDA you can find out that the best group separability is given by, say, 2*weight+3*height. Then you can try to interpret this linear combination.
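For the two-group case, that "best" linear combination is just Fisher's discriminant direction $w = S_w^{-1}(\bar{x}_B - \bar{x}_A)$, with $S_w$ the pooled within-group covariance. A small pure-Python sketch on simulated weight/height data (the group means and spreads are made up; with three groups you would use a full LDA routine instead):

```python
import random

random.seed(3)

# Toy two-group data in (weight, height); group B is shifted by (1, 1.5).
def sample(n, mw, mh):
    return [(random.gauss(mw, 1.0), random.gauss(mh, 1.0)) for _ in range(n)]

A = sample(500, 70.0, 170.0)
B = sample(500, 71.0, 171.5)

def mean(pts):
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

mA, mB = mean(A), mean(B)

# Pooled within-group covariance (2x2 entries), then w = S_w^{-1} (mB - mA):
# w is the linear combination of weight and height that best separates
# the two groups.
sxx = sxy = syy = 0.0
for pts, mu in ((A, mA), (B, mB)):
    for x, y in pts:
        dx, dy = x - mu[0], y - mu[1]
        sxx += dx * dx; sxy += dx * dy; syy += dy * dy
n = len(A) + len(B) - 2
sxx, sxy, syy = sxx / n, sxy / n, syy / n

# Invert the 2x2 covariance by hand and apply it to the mean difference.
det = sxx * syy - sxy * sxy
dx, dy = mB[0] - mA[0], mB[1] - mA[1]
w = ((syy * dx - sxy * dy) / det, (-sxy * dx + sxx * dy) / det)
print(w)  # roughly proportional to the true shift direction (1, 1.5)
```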
So the choice between these two follow-up approaches entirely depends on what you want to test.
Two further remarks
First, if you are "interested in how the three groups influence every dependent variable" (i.e. individual DVs are of primary interest), then you should arguably not run MANOVA at all, but go straight to univariate ANOVAs! Correct for multiple comparisons (note that Bonferroni is very conservative, you might prefer to control the false discovery rate instead; but see comment below for another opinion), but proceed with univariate tests. After all, nine DVs are not a lot. If, instead, you are interested in whether groups differed at all (and maybe in what respect they differed the most) but do not care so much about individual DVs, then you should use MANOVA. It all depends on your research hypothesis.
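To make the Bonferroni-vs-FDR point concrete, here is a small sketch comparing Bonferroni with the Benjamini-Hochberg step-up procedure on nine hypothetical p-values (the numbers are made up):

```python
# Nine made-up ANOVA p-values, compared under Bonferroni and under the
# Benjamini-Hochberg (FDR-controlling) step-up procedure.
p_values = [0.001, 0.004, 0.012, 0.020, 0.030, 0.047, 0.210, 0.470, 0.940]
alpha, m = 0.05, len(p_values)

bonferroni = [p <= alpha / m for p in p_values]

# BH: sort the p-values, find the largest rank k with p_(k) <= (k/m) * alpha,
# and reject the hypotheses with the k smallest p-values.
order = sorted(range(m), key=lambda i: p_values[i])
k = 0
for rank, i in enumerate(order, start=1):
    if p_values[i] <= rank / m * alpha:
        k = rank
bh = [False] * m
for i in order[:k]:
    bh[i] = True

print(sum(bonferroni), sum(bh))  # BH rejects at least as many as Bonferroni
```

On these numbers BH rejects four hypotheses where Bonferroni rejects only two, which is the gain in power referred to above.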
Second, it sounds, though, as if you might have no pre-specified hypothesis about which DVs should be influenced by group, and what exactly this influence should be. Instead, you probably have a bunch of data that you wish to explore. It is a valid wish, but it means that you are doing exploratory analysis. And in this case my best advice would be: plot the data and look at it!
You can plot a number of things. I would plot distributions of each DV for each of the three groups (i.e. nine plots; can be density plots or box plots). I would also run linear discriminant analysis (which is intimately related to MANOVA, see e.g. my answer here), project the data onto the first two discriminant axes and plot all your data as a scatter plot with different groups marked in different colours. You can project original DVs onto the same scatter plot, obtaining a biplot (here is a nice example done with PCA, but one can make a similar one with LDA too).