Statistical uncertainties in Deep Learning

Q: How is it possible to compare different approaches and claim a possible improvement without statistical confidence in the claim?

A:
I think that many papers overclaim. Take the subfield of giant pre-trained models evaluated on GLUE: even if they are tested on the same datasets, they are usually trained on different data, so it is not possible to claim that they are "better overall". A more realistic claim is that the data and the model together yield better results, with all the caveats (maybe other methods trained longer or better optimised would be better, or other methods with the same data; or the improvements on the benchmarks do not reflect real progress, as was shown on NLI recently; etc.).
ML and NLP researchers and reviewers were more concerned about statistical significance years ago; see for example Dietterich (1998), a popular paper on the topic. The standards have dropped on that front, possibly for several reasons:
People realized that statistical testing and the whole p-value approach can do more harm than good overall; see for example the Wikipedia page on misuse of p-values and Andrew Gelman's piece. That might justify dropping hypothesis testing, but not ignoring variance.
Datasets have grown a lot since the 90s/00s. The large increase in train and test data has reduced the variance of the results considerably, and the influence of parameter initialisation and of randomness in the optimisation procedure is less important. That might justify ignoring variance.
New researchers are less exposed to statistics as ML research distinguished itself from stats (see Breiman's "The two cultures" for example). I've noticed this personally, as a PhD student in a big "AI" public lab.
While these reasons make sense, I would say the trend went way too far. You are not the only one to be concerned. Here are a few interesting papers that illustrate the extent of the problem or propose concrete solutions:
It is hard to say that an algorithm is better than another in general, so weaker claims are about algorithms being better on specific datasets. However, even such weak claims sometimes do not hold up to scrutiny! Gorman and Bedrick (2019) showed in a replication study that these results sometimes only hold when the "standard" train/test splits are used!
In general, one cannot simply reuse the numbers from another paper directly and needs to replicate the results. But a common problem is unfair comparison due to uneven optimisation of the hyperparameters. Dodge et al. (2019) proposed to make more robust comparisons by taking into account the amount of computation used.
Dror et al. (2017) focused on how to make broad claims based on the results on several datasets.
The problem is not specific to NLP. For example, recent work by Musgrave et al. (2020) claims that recent "improvements" in the subfield of metric learning have been "at best marginal" (Figure 4 is extremely telling and concerning).
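As a sketch of what reporting variance across splits might look like (the accuracy numbers below are fabricated purely for illustration; in a real study each entry would come from retraining on that split), one can report the distribution of paired per-split differences rather than a single number from one "standard" split:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated accuracies of two systems on 20 random re-splits of the same data.
acc_a = 0.80 + 0.01 * rng.standard_normal(20)
acc_b = 0.79 + 0.01 * rng.standard_normal(20)

# Paired comparison: the spread of per-split differences is the quantity of
# interest, not a single difference from one privileged split.
diff = acc_a - acc_b
mean = diff.mean()
se = diff.std(ddof=1) / np.sqrt(len(diff))
print(f"mean difference: {mean:.4f} +/- {2 * se:.4f} (rough 95% interval)")
```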
References:
Dietterich (1998): Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms
Gelman: The problems with p-values are not just with p-values
Gorman & Bedrick (2019): We Need to Talk about Standard Splits
Dodge et al. (2019): Show Your Work: Improved Reporting of Experimental Results
Dror et al. (2017): Replicability Analysis for Natural Language Processing: Testing Significance with Multiple Datasets
Musgrave et al. (2020): A Metric Learning Reality Check
Statistical uncertainties in Deep Learning

A (second answer): The first problem is that fitting the model once and comparing the metrics is not the best approach, as discussed in the answer by EdM to the question "How to find a statistical significance difference of classification results?" (see the whole answer, as it seems to address your question to a great extent):
> Second, instead of simply fitting the model one time to your data, you need to see how well the modeling process works in repeated application to your data set. One way to proceed would be to work with multiple bootstrap samples, say a few hundred to a thousand, of the data. For each bootstrap sample as a training set, build KNN models with each of your distance metrics, then evaluate their performances on the entire original data set as the test set. The distribution of Brier scores for each type of model over the few hundred to a thousand bootstraps could then indicate significant differences, among the models based on different distance metrics, in terms of that proper scoring rule.
With simple models, we can make some assumptions and derive the errors; for more complicated cases, as noted in the answer, we can use procedures like the bootstrap. With deep learning models, however, the bootstrap is problematic, because such models need great computational power and time to train: the cost of a single training run of the biggest models in this field is comparable to the cost of a car.
This is one of the reasons why there is ongoing research on models that are aware of their uncertainties, e.g. Bayesian neural networks, and many research projects look into approximating this, e.g. by using dropout at prediction time (see the blog post by Yarin Gal, but see also the critique by Ian Osband). All these approaches are based on approximations and have their pitfalls. So the answer to your question would be that it is not that simple to get meaningful estimates of the uncertainties.
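As a rough illustration of the dropout-at-prediction idea: the network below has random, untrained weights (so the numbers themselves are meaningless), and only the mechanism of repeated stochastic forward passes is shown:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-hidden-layer network with random, untrained weights.
W1, b1 = rng.normal(size=(1, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def predict(x, drop_rate=0.5):
    h = np.maximum(x @ W1 + b1, 0.0)        # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate  # dropout kept ON at prediction time
    h = h * mask / (1.0 - drop_rate)        # inverted-dropout scaling
    return h @ W2 + b2

x = np.array([[0.3]])
samples = np.array([predict(x)[0, 0] for _ in range(100)])  # stochastic passes
# The spread across passes is used as a (rough) uncertainty estimate.
print(f"predictive mean {samples.mean():.3f}, std {samples.std():.3f}")
```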
What is the geometric relationship between the covariance matrix and the inverse of the covariance matrix?

A: Before I answer your questions, allow me to share how I think about covariance and precision matrices.
Covariance matrices have a special structure: they are positive semi-definite (PSD), which means that for a covariance matrix $\Sigma$ of size $m \times m$, every vector $x$ of size $m \times 1$ satisfies $x^T\Sigma x\geq0$.
Such matrices enjoy a very nice property: they can be decomposed as $\Sigma=R\Lambda R^T$, where $R$ is a rotation matrix and $\Lambda$ is a diagonal matrix.
Now that we have the definition out of the way, let's take a look at what this means with the help of a $\Sigma$ of size $2 \times 2$ (i.e. our dataset has two variables). In the image below, figure (a) shows an identity covariance matrix, which implies no correlation between the data variables; this can be drawn as a circle. Below the image, we see the identity covariance matrix decomposed into its $\Sigma=R\Lambda R^T$ form.
In figure (b), we see what happens to the geometry if we scale the variances of the variables by two different factors. The variables are still uncorrelated, but their variances are now $m$ and $n$, respectively. Now how do we introduce correlation into the mix? We rotate the ellipse with the help of a rotation matrix, which for figure (c) is simply:
$R = \begin{bmatrix} \cos(\theta) & \sin(\theta)\\ -\sin(\theta) & \cos(\theta) \end{bmatrix}$
Rotation matrices have a nice property: they are orthonormal, so $RR^T=I$ and therefore $R^T=R^{-1}$.
After that digression, let's come back to our covariance matrix. For $\Sigma$:
$\Sigma = R\Lambda R^T = \begin{bmatrix} R_{11} & R_{12}\\ R_{21} & R_{22} \end{bmatrix} \begin{bmatrix} \lambda_1 & 0\\ 0 & \lambda_2 \end{bmatrix} \begin{bmatrix} R_{11} & R_{21}\\ R_{12} & R_{22} \end{bmatrix}$
Now some fun facts: $\det(\Sigma)=\prod_{i}\lambda_i=\lambda_1\lambda_2$ and $\operatorname{tr}(\Sigma)=\sum_{i}\lambda_i=\lambda_1+\lambda_2$. Here is the kicker: $R$ actually consists of the eigenvectors of $\Sigma$, and the $\lambda_i$ are the eigenvalues.
Finally, note that $\Sigma^{-1}$ is also PSD, with the decomposition $\Sigma^{-1} = (R\Lambda R^T)^{-1} = (\Lambda R^T)^{-1}R^{-1}=(R^T)^{-1}\Lambda^{-1}R^{-1}=R\Lambda^{-1}R^T$; in the last simplification, we made use of $RR^T=I$.
Furthermore, $\Lambda^{-1} = \begin{bmatrix} \frac{1}{\lambda_1} & 0\\ 0 & \frac{1}{\lambda_2} \end{bmatrix}$; that is, we simply take the inverses of the elements along the diagonal!
With this information, we are now ready to answer your questions!
How are dispersion and tightness related geometrically?
Dispersion gives you a sense of the area of the ellipse compared to that of the circle; tightness is the inverse of dispersion. Dispersion tells you how much area change happens to the unit circle (with uncorrelated variables and identity eigenvectors), while tightness tells you how much area change you have to undo in the ellipse so that it ends up with unit variance.
What does the determinant of the inverse of the covariance matrix represent?
Since $\Lambda^{-1} = \begin{bmatrix} \frac{1}{\lambda_1} & 0\\ 0 & \frac{1}{\lambda_2} \end{bmatrix}$, the determinant of the precision matrix ($\frac{1}{\lambda_1\lambda_2}$) tells you how much area change you have to undo on your data variance so that you end up with unit variance. Recall that $\det(\Sigma)=\lambda_1\lambda_2$.
What does the trace of the inverse of the covariance matrix represent?
It's equal to $\lambda_1^{-1}+\lambda_2^{-1}$. The geometric interpretation of $\operatorname{tr}(\Sigma^{-1})$ is less clear.
What is the geometric relationship between the covariance matrix and the inverse of the covariance matrix?

A (second answer): @PAF's answer is good, but I would argue that the ellipse's semiaxes correspond to the square roots of the eigenvalues.
Indeed, the contours of a Gaussian distribution with zero mean can be defined by the equation $x^\top \Sigma^{-1} x = \alpha,\, \alpha \in \mathbb{R}$. Let us pick $\alpha=1$. As @PAF pointed out, we can decompose $\Sigma=R\Lambda R^\top\Rightarrow \Sigma^{-1}=R\Lambda^{-1} R^\top$, where $R$ is a rotation matrix. Coordinates in the rotated system, $\tilde{x}$, relate to the original coordinates as $x=R\tilde{x}$, and the equation $x^\top \Sigma^{-1} x = 1$ in the new coordinates reads as
$$ \tilde{x}^\top \Lambda^{-1} \tilde{x} = \frac{\tilde{x}_1^2}{\lambda_1}+\frac{\tilde{x}_2^2}{\lambda_2} = 1. $$
Compare that to the equation of an ellipse with semiaxes $a$ and $b$, $\frac{\tilde{x}_1^2}{a^2}+\frac{\tilde{x}_2^2}{b^2} = 1$, and we conclude $a=\sqrt{\lambda_1},\; b=\sqrt{\lambda_2}$.
An additional observation is that the eigenvectors $\nu_1, \nu_2$ of $\Sigma$ (i.e., the columns of $R$) will be the basis vectors of the rotated system.
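This can be checked numerically: with hypothetical eigenvalues and rotation angle, points with semiaxes $\sqrt{\lambda_1}, \sqrt{\lambda_2}$ in the rotated frame all satisfy $x^\top \Sigma^{-1} x = 1$:

```python
import numpy as np

theta = np.pi / 6                        # hypothetical rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
lam1, lam2 = 4.0, 1.0                    # hypothetical eigenvalues
Sigma = R @ np.diag([lam1, lam2]) @ R.T
P = np.linalg.inv(Sigma)                 # precision matrix

# Ellipse with semiaxes sqrt(lam1), sqrt(lam2), rotated into original coordinates.
t = np.linspace(0.0, 2.0 * np.pi, 100)
pts = (R @ np.vstack([np.sqrt(lam1) * np.cos(t),
                      np.sqrt(lam2) * np.sin(t)])).T   # shape (100, 2)

vals = np.einsum('ij,jk,ik->i', pts, P, pts)  # x^T Sigma^{-1} x for each point
assert np.allclose(vals, 1.0)                 # every point lies on the unit contour
```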
Indeed, the contours of Gaussian distribution with zero mean can be defined by the equation | What is the geometric relationship between the covariance matrix and the inverse of the covariance matrix?
@PAF's answer is good, but I would argue that ellipse's semiaxes correspond to square roots of eigenvalues.
Indeed, the contours of Gaussian distribution with zero mean can be defined by the equation $x^\top \Sigma^{-1} x = \alpha,\, \alpha \in \mathbb{R}$. Let us pick $\alpha=1$. As @PAF pointed out, we can decompose $\Sigma=R\Lambda R^\top\Rightarrow \Sigma^{-1}=R\Lambda^{-1} R^\top$, where $R$ is a rotation matrix. Coordinates in the rotated system $\tilde{x}$ relate to the original coordinates as $x=R\tilde{x}$, and the equation $x^\top \Sigma^{-1} x = 1$ in the new coordinates reads as
$$ \tilde{x}^\top \Lambda^{-1} \tilde{x} = \frac{\tilde{x}_1^2}{\lambda_1}+\frac{\tilde{x}_2^2}{\lambda_2} = 1. $$
Compare that to the equation of an ellipse with semiaxes $a$ and $b$, $\frac{\tilde{x}_1^2}{a^2}+\frac{\tilde{x}_2^2}{b^2} = 1$, and we conclude $a=\sqrt{\lambda_1},\; b=\sqrt{\lambda_2}$.
An additional observation is that the eigenvectors $\nu_1, \nu_2$ of $\Sigma$ (i.e., the columns of $R$) will be the basis vectors of the rotated system. | What is the geometric relationship between the covariance matrix and the inverse of the covariance m
@PAF's answer is good, but I would argue that ellipse's semiaxes correspond to square roots of eigenvalues.
Indeed, the contours of Gaussian distribution with zero mean can be defined by the equation |
Interpretation of Impulse Response and Variance Decomposition Graphs

A: Impulse response plots represent exactly what they are named after: the response of a variable given an impulse in another variable.
In your first graph you plot the impulse response of EUR to EUR. At the initial period, a positive shock to EUR will obviously lead EUR to go up by the shock amount, hence the initial value of one. The decay in the plot illustrates that, as time passes, the effect of a shock to EUR today decays to 0. (This is typically to be expected in stationary VAR models; think of the definition of a stationary AR process.)
Similarly, when GBP goes up by 1 unit of measurement, EUR goes down by about 1/2 on the next period, but the impact of a shock on GBP today on future EUR goes to 0 fast. Pretty much the same with JPY.
The IR of GBP to EUR shows a different pattern: a shock to EUR causes GBP to go down in the near future, but the effect of such a shock reverts back to 0.
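The decay-to-zero behaviour can be illustrated with a small simulated stationary VAR(1); the coefficient matrix below is an arbitrary illustrative choice, not estimated from the currencies in the question:

```python
import numpy as np

# Hypothetical stationary VAR(1): x_t = A x_{t-1} + e_t.
A = np.array([[0.5, -0.2, 0.0],
              [-0.3, 0.4, 0.1],
              [0.0, 0.1, 0.3]])
assert np.max(np.abs(np.linalg.eigvals(A))) < 1  # stationarity check

shock = np.array([1.0, 0.0, 0.0])  # unit shock to the first variable
irf = [shock]
for _ in range(10):
    irf.append(A @ irf[-1])        # response at each further horizon
irf = np.array(irf)                # rows: horizons; columns: variables

# The responses shrink toward zero as the horizon grows.
print(np.round(np.abs(irf).max(axis=1), 3))
```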
Variance decomposition shows how much a shock to one variable impacts the (variance of the) forecast error of another: in your case, 50% of the variance in the forecast error of GBP seems to be explained by a unit shock in EUR. The variance in the forecast error of all other variables is completely explained by the variable itself, i.e., the orthogonal shocks to other variables in the system do not increase the variance of your forecast error.
Maybe you would like to check this link; it expands on what I believe you are looking for.
In your first graph you plot the impulse-response of EUR to EUR. At the i | Interpretation of Impulse Response and Variance Decomposition Graphs
Impulse response plots represent what they are named after - the response of a variable given an impulse in another variable.
In your first graph you plot the impulse-response of EUR to EUR. At the initial period, a positive shock on EUR will obviously lead the EUR to go up by the shock amount - thus the initial value of one. The decay in the plot illustrates that, as time passes, the effects of a shock in EUR today decay to 0. (Typically to be expected in stationary VAR models - think of the stationary AR definition..)
Similarly, when GBP goes up by 1 unit of measurement, EUR goes down by about 1/2 on the next period, but the impact of a shock on GBP today on future EUR goes to 0 fast. Pretty much the same with JPY.
The IR of GBP to EUR shows a different pattern - a shock to EUR causes GBP to go down in the near future, but the effect of such shock is mean reverting to 0.
Variance decomposition shows how much a shock to one variable impacts the (variance of the) forecast error of a different one - in your case, 50% of the variance in the forecast error of GBP seems to be explained by a unit shock in EUR. The variance in the forecast error of all other variables is completely explained by the variable alone, i.e, the orthogonal shocks to other variables in the system do not increase the variance of your forecast error.
Maybe you would like to check this link - it expands on what I believe you are looking for. | Interpretation of Impulse Response and Variance Decomposition Graphs
Impulse response plots represent what they are named after - the response of a variable given an impulse in another variable.
In your first graph you plot the impulse-response of EUR to EUR. At the i |
Quadratic form and Chi-squared distribution

A: In general, the quadratic form is a weighted sum of $\chi_1^2$ random variables.
It is not true in general that $\mathbf{z}^\text{T} \mathbf{\Sigma} \mathbf{z} \sim \chi^2_p$ for any symmetric positive-definite (variance) matrix $\mathbf{\Sigma}$. Breaking this quadratic form down using the spectral theorem you get:
$$\begin{equation} \begin{aligned}
\mathbf{z}^\text{T} \mathbf{\Sigma} \mathbf{z}
= \mathbf{z}^\text{T} \mathbf{Q} \mathbf{\Lambda} \mathbf{Q}^\text{T} \mathbf{z}
&= (\mathbf{Q}^\text{T} \mathbf{z})^\text{T} \mathbf{\Lambda} (\mathbf{Q}^\text{T} \mathbf{z}) \\[6pt]
&= \sum_{i=1}^p \lambda_i ( \mathbf{q}_i \cdot \mathbf{z} )^2, \\[6pt]
\end{aligned} \end{equation}$$
where $\mathbf{q}_1,...,\mathbf{q}_p$ are the eigenvectors of $\mathbf{\Sigma}$ (i.e., the columns of $\mathbf{Q}$). Define the random variables $y_i = \mathbf{q}_i \cdot \mathbf{z}$. Since $\mathbf{\Sigma}$ is a real symmetric matrix, the eigenvectors $\mathbf{q}_1,...,\mathbf{q}_p$ are orthonormal; because an orthonormal transformation of $\mathbf{z} \sim \text{N}(\mathbf{0}, \mathbf{I})$ is again standard normal, this means that $y_1,...,y_p \sim \text{IID N}(0,1)$. Thus, we have:
$$\begin{equation} \begin{aligned}
\mathbf{z}^\text{T} \mathbf{\Sigma} \mathbf{z}
&= \sum_{i=1}^p \lambda_i ( \mathbf{q}_i \cdot \mathbf{z} )^2 \\[6pt]
&= \sum_{i=1}^p \lambda_i \cdot y_i^2 \\[6pt]
&\sim \sum_{i=1}^p \lambda_i \cdot \chi_1^2. \\[6pt]
\end{aligned} \end{equation}$$
We can see that the distribution of the quadratic form is a weighted sum of $\chi_1^2$ random variables, where the weights are the eigenvalues of the variance matrix. In the special case where these eigenvalues are all one, we do indeed obtain $\mathbf{z}^\text{T} \mathbf{\Sigma} \mathbf{z} \sim \chi_p^2$, but in general this result does not hold. In fact, we can see that, in general, the quadratic form is distributed as a weighted sum of chi-squared random variables, each with one degree of freedom.
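A quick Monte Carlo check of this result, using an arbitrary hypothetical positive-definite matrix: the mean of the quadratic form should match the sum of the eigenvalues (the weights of the $\chi_1^2$ terms), which equals the trace.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical symmetric positive-definite matrix.
A = rng.normal(size=(3, 3))
Sigma = A @ A.T + np.eye(3)
lams = np.linalg.eigvalsh(Sigma)          # eigenvalues = chi-squared weights

z = rng.standard_normal((200_000, 3))     # draws of z ~ N(0, I)
q = np.einsum('ij,jk,ik->i', z, Sigma, z) # z^T Sigma z for each draw

# E[z^T Sigma z] = sum of eigenvalues (= trace), matching the weighted-sum form.
assert np.isclose(q.mean(), lams.sum(), rtol=0.02)
```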
The general distribution of this form is complicated and its density function does not have a closed-form representation. Davies (1980) provides an algorithm to compute the distribution function (the algorithm is actually for a broader generalisation). Bodenham and Adams (2015) examine some approximations to this distribution and provide comparisons with simulations.
Quadratic form and Chi-squared distribution

A (second answer): Regarding point 3 above, suppose $\Sigma_{3\times 3}$ is diagonal with diagonal elements $\{3, 2, 1\}$; then $\Sigma$ is symmetric positive definite, but $z'\Sigma z$ cannot be $\chi^2_3$ (chi-squared with 3 df), because $E\left[z'\Sigma z\right]=3E(z_1^2)+2E(z_2^2)+E(z_3^2)=6$, while the mean of a chi-squared with 3 df is 3.
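The same point can be seen by simulation (a rough sketch; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

Sigma = np.diag([3.0, 2.0, 1.0])
z = rng.standard_normal((100_000, 3))
q = (z * (z @ Sigma)).sum(axis=1)   # z' Sigma z for each draw

# The sample mean is close to 6, not 3, so q cannot be chi-squared with 3 df.
print(f"sample mean of z' Sigma z: {q.mean():.3f}")
```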
23,808 | Is Gaussian Process just a Multivariate Gaussian Distribution? | The multivariate Gaussian distribution describes the behaviour of a finite (or at least countable) random vector. By contrast, a Gaussian process is a stochastic process defined over a continuum of values (i.e., an uncountably large set of values). Usually the process is defined over all real time inputs, so it is a process of the form $\{ X(t) | t \in \mathbb{R} \}$. The Gaussian process is fully defined by a mean function and a covariance function, which respectively describe the mean of the process at any point and the covariance of the process at any two points.
Now, one of the central properties of the Gaussian process is that the values of the process at any finite set of points have a multivariate Gaussian distribution, with mean vector and variance matrix described by the mean function and covariance function of the process. Specifically, for any time points $\mathbf{t}=(t_1,...,t_n)$ we have:
$$[X(t_1),...,X(t_n)] \sim \text{N}(\boldsymbol{\mu}(\mathbf{t}), \boldsymbol{\Sigma}(\mathbf{t})).$$
where $\boldsymbol{\mu}(\mathbf{t}) = [\mu(t_i)]_{i=1,...,n}$ is the mean vector composed of values of the mean function over these time points, and $\boldsymbol{\Sigma}(\mathbf{t}) = [\sigma(t_i, t_j)]_{i,j=1,...,n}$ is the variance matrix composed of values of the covariance function over pairs of time points. The stochastic behaviour of the Gaussian process can be regarded as an extension of the multivariate Gaussian distribution to stochastic processes defined on a continuum.
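To make this concrete (a sketch only; the zero mean function and squared-exponential covariance function below are hypothetical choices, not from the answer), here is how a finite set of time points yields an ordinary multivariate Gaussian in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical choices: zero mean function, squared-exponential covariance.
def mu(t):
    return np.zeros_like(t)

def sigma(s, t):
    return np.exp(-0.5 * (s - t) ** 2)

# Any finite set of time points gives an ordinary multivariate Gaussian.
t = np.array([0.0, 0.5, 1.3, 2.0])
mean_vec = mu(t)
cov_mat = sigma(t[:, None], t[None, :])   # matrix of pairwise covariances

# One draw = the process restricted to these time points.
draw = rng.multivariate_normal(mean_vec, cov_mat)
```

Adding more time points just enlarges the mean vector and covariance matrix; the marginal over the original points is unchanged, which is the consistency property the next answer emphasises.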
23,809 | Is Gaussian Process just a Multivariate Gaussian Distribution? | A Gaussian process is a generalization of the Gaussian probability distribution. Whereas a probability distribution describes random
variables which are scalars or vectors (for multivariate
distributions), a stochastic process governs the properties of
functions. Leaving mathematical sophistication aside, one can loosely
think of a function as a very long vector, each entry in the vector
specifying the function value f (x) at a particular input x. It
turns out, that although this idea is a little naïve, it is
surprisingly close to what we need. Indeed, the question of how we deal
computationally with these infinite dimensional objects has the most
pleasant resolution imaginable: if you ask only for the properties of
the function at a finite number of points, then inference in the
Gaussian process will give you the same answer if you ignore the
infinitely many other points, as if you would have taken them all into
account! And these answers are consistent with answers to any other
finite queries you may have. One of the main attractions of the
Gaussian process framework is precisely that it unites a sophisticated
and consistent view with computational tractability.
– C. E. Rasmussen & C. K. I. Williams, Gaussian Processes for Machine Learning, the MIT Press, 2006. (Emphasis is my own.)
23,810 | Is Gaussian Process just a Multivariate Gaussian Distribution? | I had the same question, and I found the following in the Stan user guide, which I think has both a short answer and a more detailed answer if you want to read further (well, by now you have probably nailed it, but it might be interesting for others).
23,811 | What are some of the disadvantages of Bayesian hyperparameter optimization? | Results are sensitive to parameters of the surrogate model, which are typically fixed at some value; this underestimates uncertainty. Or else you have to be fully Bayesian and marginalize over hyperparameter distributions, which can be expensive and unwieldy.
It takes a dozen or so samples to get a good surrogate surface in 2 or 3 dimensions of search space; increasing the dimensionality of the search space requires yet more samples.
Bayesian optimization itself depends on an optimizer to search the surrogate surface, which has its own costs -- this problem is (hopefully) cheaper to evaluate than the original problem, but it is still a non-convex box-constrained optimization problem (i.e., difficult!).
Estimating the BO model itself has costs.
To state it another way, BO is an attempt to keep the number of function evaluations to a minimum and get the most "bang for the buck" from each evaluation. This is important if you're conducting destructive tests, or just doing a simulation that takes an obscene amount of time to execute. But in all but the most expensive cases, apply pure random search and call it a day! (Or LIPO, if your problem is amenable to its assumptions.) It can save you a number of headaches, such as optimizing your Bayesian optimization program.
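A sketch of the pure random-search alternative mentioned above (the objective function and search ranges are made up purely for illustration; any expensive training-and-validation loop could take its place):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical expensive objective over two hyper-parameters
# (made-up optimum: learning rate 1e-3, regularisation 1e-1).
def objective(lr, reg):
    return (np.log10(lr) + 3) ** 2 + (np.log10(reg) + 1) ** 2

best_score, best_params = np.inf, None
for _ in range(60):                   # fixed evaluation budget
    lr = 10 ** rng.uniform(-6, 0)     # sample on a log scale
    reg = 10 ** rng.uniform(-4, 2)
    score = objective(lr, reg)
    if score < best_score:
        best_score, best_params = score, (lr, reg)
```

No surrogate model, no inner optimizer, no hyper-hyper-parameters -- just a budget of evaluations, which is exactly the trade-off the answer describes.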
23,812 | GAMM with zero-inflated data | In addition to mgcv and its zero-inflated Poisson families (ziP() and ziplss()), you might also look at the brms package by Paul-Christian Bürkner. It can fit distribution models (where you model more than just the mean, in your case the zero-inflation component of the model can be modelled as a function of covariates just like the count function).
You can include smooths in any of the linear predictors (for the mean/count, zero-inflation part, etc) via s() and t2() terms for simple 1-d or isotropic 2-d splines, or anisotropic tensor product splines respectively. It has support for zero-inflated binomial, Poisson, negative binomial, and beta distributions, plus zero-one-inflated beta distributions. It also has hurdle models for Poisson and negative binomial responses (where the count part of the model is a truncated distribution so as to not produce further zero counts).
brms fits these models using Stan, so they are fully Bayesian, but this will require you to learn a new set of interfaces to extract relevant information. That said, there are several packages offering support functions for just this task, and brms has helper functions written that utilise these secondary packages. You'll need to get Stan installed and you'll need a C++ compiler, as brms compiles the model as defined using R into Stan code for evaluation.
23,813 | GAMM with zero-inflated data | The glmmTMB package offers this and is described in a recent bioRxiv paper: Brooks et al. (2017). Modeling Zero-Inflated Count Data with glmmTMB, bioRxiv, doi:10.1101/132753.
Gavin Simpson also has a nice blog post that compares glmmTMB with mgcv for this purpose: Fitting count and zero-inflated count GLMMs with mgcv.
23,814 | How to improve F1 score with skewed classes? | Most of the classification problems I've tackled are similar in nature, so a large class imbalance is quite common.
It is not clear whether you are using training/validation sets to build and fine-tune the model. Cross-validation is generally preferred since it gives more reliable model performance estimates.
The F1 score is a good classification performance measure; I find it more important than the AUC-ROC metric. It's best to use a performance measure which matches the real-world problem you're trying to solve.
Without having access to the dataset, I'm unable to give exact pointers; so I'm suggesting a few directions to approach this problem and help improve the F1 score:
Use better features; sometimes a domain expert (specific to the problem you're trying to solve) can give relevant pointers that result in significant improvements.
Use a better classification algorithm and better hyper-parameters.
Over-sample the minority class, and/or under-sample the majority class to reduce the class imbalance.
Use higher weights for the minority class, although I've found over-under sampling to be more effective than using weights.
Choose an optimal cutoff value to convert the continuous-valued class probabilities output by your algorithm into a class label. This is as important as a good AUC metric but is overlooked quite often. A word of caution though: the choice of the cutoff should be guided by the users by evaluating the relevant trade-offs.
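The cutoff point can be sketched in Python like this (the probabilities below are synthetic, invented for the example; in practice they would come from your classifier, and the final choice should still reflect the real trade-offs mentioned above):

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic example: ~10% positives, scores that separate the classes.
y_true = (rng.random(2000) < 0.1).astype(int)
p_hat = np.where(y_true == 1,
                 0.7 + 0.2 * rng.random(2000),   # positives score high
                 0.2 * rng.random(2000))         # negatives score low

# Sweep candidate cutoffs and keep the one with the best F1.
cutoffs = np.linspace(0.05, 0.95, 19)
f1s = [f1_score(y_true, (p_hat >= c).astype(int)) for c in cutoffs]
best_cutoff = cutoffs[int(np.argmax(f1s))]
```

Note that the default cutoff of 0.5 is rarely optimal on skewed data, which is exactly why this step matters.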
23,815 | How to improve F1 score with skewed classes? | The following Python snippet demonstrates up-sampling: sampling with replacement the instances of the class that are fewer in number (a.k.a. the minority class) in a data frame, to address the class imbalance problem:
import pandas as pd
from sklearn.utils import resample
# df is a data frame with FRAUD as the target column with classes 0 and 1.
# There are more instances of class 0 than class 1 in the data frame df.
# Separate majority and minority classes
df_majority = df.loc[df.FRAUD == 0].copy()
df_minority = df.loc[df.FRAUD == 1].copy()
# Upsample minority class
df_minority_upsampled = resample(df_minority,
replace=True, # sample with replacement
n_samples=len(df_majority), # to match majority class
random_state=123) # reproducible results
# Combine majority class with upsampled minority class
df_upsampled = pd.concat([df_majority, df_minority_upsampled])
# Display new class counts
print(df_upsampled.FRAUD.value_counts())
23,816 | Interpretation of R-squared score of a Neural Network for classification | $R^2$ is not a good measure to assess goodness of fit for a classification task.
$R^2$ is suitable for predicting a continuous variable. When the dependent variable is continuous, $R^2$ usually takes values between $0$ and $1$ (in linear regression, for example, it is impossible to have $R^2$ beyond these boundaries), and it is interpreted as the share of variance of the dependent variable that the model is able to correctly reproduce. When $R^2$ equals $1$ it means that the model is able to fully recreate the dependent variable; when it equals $0$, it means that the model completely failed at this task.
When the dependent variable is categorical this makes no sense, because $R^2$ uses distances between predicted and actual values, while distances between, let's say, '1' meaning class 'A', '2' meaning class 'B' and '3' meaning class 'C' make no sense.
Use other measures, for example AUC for classification with two classes and logarithmic loss for classification with more classes. And make sure that you are using appropriate parameters in your model: in many machine learning models you must declare whether the problem is of a classification or regression nature, and it can affect results dramatically.
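A small illustration of the point about label distances (the labels and predictions are made up for the example): relabelling the same three classes with different arbitrary numeric codes changes $R^2$ completely, while the classification itself is unchanged:

```python
from sklearn.metrics import accuracy_score, r2_score

true_lab = ["A", "A", "B", "B", "C", "C"]
pred_lab = ["B", "B", "B", "B", "C", "C"]   # two mistakes, both on class A

# Two arbitrary numeric encodings of the same three classes.
enc1 = {"A": 1, "B": 2, "C": 3}
enc2 = {"A": 1, "B": 3, "C": 2}

# Same predictions, yet R^2 depends entirely on the chosen codes.
r2_a = r2_score([enc1[c] for c in true_lab], [enc1[c] for c in pred_lab])
r2_b = r2_score([enc2[c] for c in true_lab], [enc2[c] for c in pred_lab])

# Classification quality is identical under either encoding.
acc = accuracy_score(true_lab, pred_lab)
```

Here `r2_a` and `r2_b` differ wildly (one is even negative) although the accuracy is the same, because $R^2$ treats the arbitrary code distances as meaningful.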
23,817 | Multivariate linear regression with lasso in r | For multivariate responses (number of dependent variables larger than 1), you need family = "mgaussian" in the call of glmnet.
The lsgl package is an alternative, which provides a more flexible penalty.
With a $k$-dimensional response, the glmnet package implements the penalty
$$\sum_{j = 1}^p \| \boldsymbol{\beta}_j \|_2$$
where $\boldsymbol{\beta}_j = (\beta_{j1}, \ldots, \beta_{jk})^T$ is the vector of coefficients for the $j$th predictor. In the help page for glmnet you can read:
The former [family = "mgaussian"] allows a multi-response gaussian model to be fit, using a "group-lasso" penalty on the coefficients for each variable. Tying the responses together like this is called "multi-task" learning in some domains.
This penalty is an example of a group lasso penalty, which groups parameters for the different responses that are associated to the same predictor. It results in the selection of the same predictors across all responses for a given value of the tuning parameter.
The lsgl package implements sparse group lasso penalties of the form
$$\alpha \sum_{j=1}^p \sum_{l = 1}^k \xi_{jl} |\beta_{jl}| + (1-\alpha) \sum_{j = 1}^p \gamma_{j} \| \boldsymbol{\beta}_j \|_2$$
where $\xi_{jl}$ and $\gamma_{j}$ are certain weights chosen to balance the contributions from the different terms. The default is $\xi_{jl} = 1$ and $\gamma_{j} = \sqrt{k}$. The parameter $\alpha \in [0,1]$ is a tuning parameter. With $\alpha = 0$ (and $\gamma_j = 1$) the penalty is equivalent to the penalty used by glmnet with family = "mgaussian". With $\alpha = 1$ (and $\xi_{jl} = 1$) the penalty gives ordinary lasso. The lsgl implementation also allows for an additional grouping of the predictors.
A note about group lasso. The term group lasso is often associated with a grouping of predictors. However, from a more general viewpoint, group lasso is simply a grouping of parameters in the penalty. The grouping used by glmnet with family = "mgaussian" is a grouping of parameters across responses. The effect of such a grouping is to couple the estimation of the parameters across the responses, which turns out to be a good idea if all the responses can be predicted from roughly the same set of predictors. The general idea of coupling multiple learning problems that are expected to share some structure is known as multi-task learning.
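The same across-responses group lasso penalty is available in Python via scikit-learn's MultiTaskLasso (an analogous implementation; it is not mentioned in the original answer, and the data below are synthetic, invented to show the joint selection behaviour):

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(0)
n, p, k = 200, 10, 3
X = rng.normal(size=(n, p))

# Only the first two predictors drive any of the k responses.
W = np.zeros((p, k))
W[0] = [2.0, -1.5, 1.0]
W[1] = [1.0, 2.0, -1.0]
Y = X @ W + 0.1 * rng.normal(size=(n, k))

# Penalty: sum over predictors of the l2 norm of that predictor's
# coefficients across responses -- the "mgaussian"-style group lasso.
fit = MultiTaskLasso(alpha=0.5).fit(X, Y)

# A predictor is either in the model for every response or for none.
selected = np.any(fit.coef_ != 0, axis=0)   # coef_ has shape (k, p)
```

The row-wise pattern of `fit.coef_` shows the coupling: the two informative predictors are selected jointly for all three responses, and the noise predictors are dropped jointly.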
23,818 | Machine Learning technique for learning string patterns | Could your problem be restated as wanting to discover the regular expressions that will match the strings in each category? This is a "regex generation" problem, a subset of the grammar induction problem (see also Alexander Clark's website).
The regular expression problem is easier. I can point you to two code projects, frak and RegexGenerator. The online RegexGenerator++ has references to their academic papers on the problem.
23,819 | Machine Learning technique for learning string patterns | You could try recurrent neural networks, where your input is a sequence of the letters in the word, and your output is a category. This fits your requirement such that you don't hand code any features.
However for this method to actually work you will require a fairly large training data set.
You can refer to Supervised Sequence Labelling with Recurrent Neural Networks by Alex Graves, chapter 2, for more details.
This is a link to the preprint.
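A minimal sketch of the architecture in pure Python, with small fixed weights rather than trained ones (real use requires training, e.g. backpropagation through time, and as noted a fairly large data set); it only shows how the letters are fed in one at a time and folded into a category distribution:

```python
import math

VOCAB = "abcdefghijklmnopqrstuvwxyz"
HIDDEN, CLASSES = 4, 2

def one_hot(c):
    v = [0.0] * len(VOCAB)
    v[VOCAB.index(c)] = 1.0
    return v

# Small fixed weights so the sketch is deterministic (a real model learns these).
Wxh = [[0.01 * ((i + j) % 5 - 2) for j in range(len(VOCAB))] for i in range(HIDDEN)]
Whh = [[0.1 if i == j else 0.0 for j in range(HIDDEN)] for i in range(HIDDEN)]
Why = [[0.5 * ((i + j) % 3 - 1) for j in range(HIDDEN)] for i in range(CLASSES)]

def step(h, x):
    # One recurrence: h_t = tanh(Wxh x_t + Whh h_{t-1})
    return [math.tanh(sum(Wxh[i][k] * x[k] for k in range(len(VOCAB)))
                      + sum(Whh[i][k] * h[k] for k in range(HIDDEN)))
            for i in range(HIDDEN)]

def categorize(word):
    h = [0.0] * HIDDEN
    for c in word:                        # the letters are the input sequence
        h = step(h, one_hot(c))
    logits = [sum(Why[i][k] * h[k] for k in range(HIDDEN)) for i in range(CLASSES)]
    z = [math.exp(v) for v in logits]     # softmax over the categories
    return [v / sum(z) for v in z]

probs = categorize("cat")
assert len(probs) == CLASSES and abs(sum(probs) - 1.0) < 1e-9
```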
23,820 | Using LASSO only for feature selection | Almost any approach that does some form of model selection and then does further analyses as if no model selection had previously happened typically has poor properties. Unless there are compelling theoretical arguments, backed up by evidence from e.g. extensive simulation studies for realistic sample sizes and feature-to-sample-size ratios, to show that this is an exception, it is likely that such an approach will have unsatisfactory properties. I am not aware of any such positive evidence for this approach, but perhaps someone else is. Given that there are reasonable alternatives that achieve all desired goals (e.g. the elastic net), it is hard to justify using such a suspect ad-hoc approach instead.
23,821 | Using LASSO only for feature selection | Besides all the answers above: It is possible to calculate an exact chi2 permutation test for 2x2 and rxc tables.
Instead of comparing our observed value of the chi-square statistic to an asymptotic chi-square distribution, we need to compare it to the exact permutation distribution. We need to permute our data in all possible ways, keeping the row and column margins constant. For each permuted data set we calculate the chi-square statistic. We then compare our observed statistic with the (sorted) permuted statistics.
The ranking of the real test statistic among the permuted test statistics gives a p-value.
23,822 | Interpretation of coefficient of inverse Mills ratio | Let's say we have the following model:
$$
y_{i}^{*}=x_{i}'\beta + \epsilon_{i} \quad \textrm{for} \quad i=1,\ldots,n
$$
We can think about this in a few ways, but I think the typical procedure is to imagine us trying to estimate the effect of observed characteristics on the wage individual $i$ earns. Naturally, there are some people who choose not to work and potentially the decision to work can be modeled in the following way:
$$
d_{i}^{*}=z_{i}'\gamma + v_{i} \quad \textrm{ for } \quad i =1,\ldots,n
$$
If $d_{i}^{*}$ is greater than zero, we observe $y_{i} = y_{i}^{*}$ and if not we simply don't observe a wage for the person. I'm assuming that you know that OLS will lead to biased estimates as $E[\epsilon_{i}|z_{i},d_{i}=1]\neq 0$ in some circumstances. There are some conditions under which this might hold, which we can test via Heckman's Two-Step procedure. Otherwise, OLS is just misspecified.
Heckman tried to account for the endogeneity in this selection bias situation. So, to try to get rid of the endogeneity, Heckman suggested that we first estimate $\gamma$ via MLE probit, typically using an exclusion restriction. Afterward, we estimate an inverse Mills ratio, which essentially tells us the probability that an agent decides to work over the cumulative probability of an agent's decision, i.e.:
$$\lambda_{i} = \frac{\phi(z_{i}'\gamma)}{\Phi(z_{i}'\gamma)}$$
Note: because we're using probit, we're actually estimating $ \gamma/\sigma_{v}$.
We'll call the estimated value above $\hat{\lambda}_{i}$. We use this as a means of controlling the endogeneity, i.e. the part of the error term for which the decision to work influences the wage earned. So, the second step is actually:
$$y_{i} = x_{i}'\beta + \mu{\hat{\lambda_{i}}} + \xi_{i}$$
So, ultimately, your question is how to interpret $\mu$, correct?
The interpretation of the coefficient, $\mu$, is:
$$
\frac{\sigma_{\epsilon v}}{\sigma_{v}^{2}}
$$
What does this tell us? Well, this is the fraction of the covariance between the decision to work and the wage earned relative to the variation in decision to work. A test of selection bias is therefore a t-test on whether or not $\mu=0$ or ${\rm cov}(\epsilon,{v})=0$.
Hopefully that makes sense to you (and I didn't make any egregious errors).
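The inverse Mills ratio itself is straightforward to compute. A sketch using only the standard library, where index stands for the fitted probit index $z_{i}'\hat{\gamma}$:

```python
import math

def norm_pdf(t):
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def norm_cdf(t):
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def inverse_mills(index):
    # lambda_i = phi(z'gamma) / Phi(z'gamma), evaluated at the probit index.
    return norm_pdf(index) / norm_cdf(index)

# At index 0 (a 50/50 chance of working), lambda = phi(0)/0.5 ~ 0.798.
assert abs(inverse_mills(0.0) - norm_pdf(0.0) / 0.5) < 1e-12
# The ratio grows as the index falls: the selection correction is largest
# for people who, on observables, were unlikely to work at all.
assert inverse_mills(-1.0) > inverse_mills(1.0)
```

In the second step one would append `inverse_mills(z_i @ gamma_hat)` as an extra regressor and run OLS, as in the equation for $y_i$ above.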
23,823 | Handling unknown words in language modeling tasks using LSTM | Option 1 (adding an unknown word token) is how most people solve this problem.
Option 2 (deleting the unknown words) is a bad idea because it transforms the sentence in a way that is not consistent with how the LSTM was trained.
Another option that has recently been developed is to create a word embedding on-the-fly for each word using a convolutional neural network or a separate LSTM that processes the characters of each word one at a time. Using this technique your model will never encounter a word that it can't create an embedding for.
23,824 | Handling unknown words in language modeling tasks using LSTM | Mapping rare words to <unk> simply means that we delete those words and replace them with the <unk> token in the training data. Thus our model does not know of any rare words. It is a crude form of smoothing, because the model assumes that the <unk> token will never actually occur in real data or, better yet, it ignores these n-grams altogether.
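The mapping step described above can be sketched as follows (the min_count threshold of 2 and the spelling "<unk>" are conventions chosen for illustration):

```python
from collections import Counter

UNK = "<unk>"

def build_vocab(tokens, min_count=2):
    # Words seen fewer than min_count times are excluded from the vocabulary.
    counts = Counter(tokens)
    return {w for w, c in counts.items() if c >= min_count} | {UNK}

def map_rare_to_unk(tokens, vocab):
    # Every out-of-vocabulary word is replaced by the single <unk> token.
    return [t if t in vocab else UNK for t in tokens]

corpus = "the cat sat on the mat the dog sat".split()
vocab = build_vocab(corpus)
mapped = map_rare_to_unk(corpus, vocab)
assert mapped == ["the", UNK, "sat", UNK, "the", UNK, "the", UNK, "sat"]
```

At test time the same mapping is applied, so any word unseen in training is still representable, at the cost of collapsing all rare words into one symbol.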
23,825 | Logistic regression vs chi-square in a 2x2 and Ix2 (single factor - binary response) contingency tables? | Ultimately, it's apples and oranges.
Logistic regression is a way to model a nominal variable as a probabilistic outcome of one or more other variables. Fitting a logistic-regression model might be followed up with testing whether the model coefficients are significantly different from 0, computing confidence intervals for the coefficients, or examining how well the model can predict new observations.
The χ² test of independence is a specific significance test that tests the null hypothesis that two nominal variables are independent.
Whether you should use logistic regression or a χ² test depends on the question you want to answer. For example, a χ² test could check whether it is unreasonable to believe that a person's registered political party is independent of their race, whereas logistic regression could compute the probability that a person with a given race, age, and gender belongs to each political party.
23,826 | Two definitions of p-value: how to prove their equivalence? | We have some multivariate data $x$, drawn from a distribution $\mathcal{D}$ with some unknown parameter $\theta$. Note that $x$ are sample outcomes.
We want to test some hypothesis about an unknown parameter $\theta$, the values of $\theta$ under the null hypothesis are in the set $\theta_0$.
In the space of the $X$, we can define a rejection region $R$, and the power of this region $R$ is then defined as $\mathcal{P}_\bar{\theta}^R=P_\bar{\theta}(x \in R)$. So the power is computed for a particular value $\bar{\theta}$ of $\theta$ as the probability that the sample outcome $x$ is in the rejection region $R$ when the value of $\theta$ is $\bar{\theta}$. Obviously the power depends on the region $R$ and on the chosen $\bar{\theta}$.
Definition 1 defines the size of the region $R$ as the supremum of all the values of $\mathcal{P}_\bar{\theta}^R$ for $\bar{\theta}$ in $\theta_0$, so only for values of $\bar{\theta}$ under $H_0$. Obviously this depends on the region, so $\alpha^R=\sup_{\bar{\theta} \in \theta_0} \mathcal{P}_\bar{\theta}^R$.
As $\alpha^R$ depends on $R$, we have another value when the region changes, and this is the basis for defining the p-value: change the region, but in such a way that the sample observed value still belongs to the region; for each such region, compute the $\alpha^R$ as defined above and take the infimum: $pv(x)=\inf_{R \,:\, x \in R} \alpha^R$. So the p-value is the smallest size of all regions that contain $x$.
The theorem is then just a 'translation' of it, namely the case where the regions $R$ are defined using a statistic $T$ and for a value $c$ you define a region $R$ as $R=\{ x | T(x) \ge c \}$. If you use this type of region $R$ in the above reasoning, then the theorem follows.
EDIT because of comments:
@user8: for the theorem; if you define rejection regions as in the theorem, then a rejection region of size $\alpha$ is a set that looks like $R^\alpha= \{X | T(X) \ge c_\alpha \}$ for some $c_\alpha$.
To find the p-value of an observed value $x$, i.e. $pv(x)$ you have to find the smallest region $R$, i.e. the largest value of $c$ such that $\{X | T(X) \ge c \}$ still contains $x$, the latter (the region contains $x$) is equivalent (because of the way the regions are defined) to saying that $c \ge T(x)$, so you have to find the largest $c$ such that $\{X | T(X) \ge c \& c \ge T(x) \}$
Obviously, the largest $c$ such that $ c \ge T(x)$ should be $ c = T(x)$ and then the set supra becomes $\{ X | T(X) \ge c = T(x)\}=\{ X | T(X) \ge T(x)\}$.
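A concrete instance of the theorem: let $T$ be the number of heads in $n$ tosses and take $H_0$: the coin is fair. The regions $\{x \mid T(x) \ge c\}$ shrink as $c$ grows, and the smallest one still containing the observed data uses $c = T(x)$, so $pv(x)=P_{H_0}(T(X) \ge T(x))$, computable exactly:

```python
from math import comb

def pvalue(n, t_obs, p0=0.5):
    # P_{H0}(T(X) >= T(x)) for T = number of heads in n tosses.
    return sum(comb(n, t) * p0 ** t * (1 - p0) ** (n - t)
               for t in range(t_obs, n + 1))

# Observing 8 heads in 10 tosses of a (hypothesised) fair coin:
p = pvalue(10, 8)
assert abs(p - 56 / 1024) < 1e-12   # C(10,8)+C(10,9)+C(10,10) = 56 outcomes

# Size shrinks as c grows, so the smallest region containing x uses c = T(x).
assert pvalue(10, 9) < pvalue(10, 8) < pvalue(10, 7)
```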
23,827 | Two definitions of p-value: how to prove their equivalence? | In Definition 2, the $p$-value of a test statistic is the greatest lower bound of all $\alpha$ such that the hypothesis is rejected for a test of size $\alpha$. Recall that the smaller we make $\alpha$, the less tolerance for Type I error we are allowing, thus the rejection region $R_\alpha$ will also decrease. So (very) informally speaking, the $p$-value is the smallest $\alpha$ we can choose that still lets us reject $H_0$ for the data that we observed. We cannot arbitrarily choose a smaller $\alpha$ because at some point, $R_\alpha$ will be so small that it will exclude (i.e., fail to contain) the event we observed.
Now, in light of the above, I invite you to reconsider the theorem.
23,828 | Fisher consistency versus "standard" consistency | Following https://en.wikipedia.org/wiki/Fisher_consistency, Fisher consistency means that if the estimator were calculated using the complete population and not only a sample, the correct value would be obtained. Asymptotic consistency means that as the sample size goes to infinity, the estimator converges in probability to the true value. Neither concept entails the other; see the above wikipedia article for examples.
I will give some other examples, not from the wikipedia article, I don't find that particularly clear!
Let $X_1, X_2, \dots, X_n$ be a random sample from a population described by the cumulative distribution function $F$. Now we will consider "functional parameters", that is, parameters which can be written as a functional of $F$,
$$
\theta = \theta(F)
$$
Examples are for instance the expectation $\mu=\mu(F)=\int_{-\infty}^\infty x\; dF(x)$ (where this notation denotes a Riemann-Stieltjes integral, which equals $\int_{-\infty}^\infty x f(x) \; dx $ in the continuous case and a summation in the discrete case). Another example is the median $m=F^{-1}(0.5)$, the interquartile range $\text{IQR} = F^{-1}(0.75) - F^{-1}(0.25)$ and many other examples like the variance $\sigma^2 = \int_{-\infty}^\infty (x-\mu)^2\; dF(x)$.
In the following let us use the expectation $\mu$ as example. So we are interested in estimating from our sample $\mu=\int_{-\infty}^\infty x\; dF(x)$.
One estimator is the arithmetic mean which can be written
$$
\bar{x}=\frac1{n}\sum x_i = \int_{-\infty}^\infty x\; d\hat{F}_n(x)
$$
where $\hat{F}_n$ denotes the empirical distribution function. The value of that at the population $F$ is obtained by replacing $\hat{F}_n$ by $F$ and is the true expectation, showing that the arithmetic mean is Fisher consistent.
Now, maybe you are preoccupied with the "lack of robustness" of the arithmetic mean, that it is unduly influenced by outliers, for instance. So you want some more robust (or resistant) estimator. Two such are the median and winsorized means, that is, arithmetic means taken after throwing away some of the smallest and largest observations. The empirical median, for instance, will be an unbiased estimator of the mean $\mu$ in symmetric families such as the normal. But the median, as an estimator of the mean, is not Fisher-consistent. The empirical median can be written $\hat{F}_n^{-1}(0.5)$ and, evaluated at the population $F$, it is in general different from the mean.
The same happens with the winsorized means. Many other robust estimators as well will not be Fisher-consistent.
Fisher-consistency is about "doing the right thing at the model", while robustness is about obtaining reasonable answers also in some neighbourhood of the model, when the model is not right. Those are different goals.
The original poster says in comments: "I would expect that except pathologies, a (classically) consistent estimator is also Fisher consistent." There is no basis for such an expectation! A simple example is the usual unbiased estimator of variance, $s^2 = \frac1{n-1}\sum_{i=1}^n \left(x_i-\bar{x} \right)^2$. This is unbiased and consistent in the usual sense (whatever $F$ is, as long as it has a variance!). In functional form this can be written (after some algebra ...)
$$
s^2=\frac{n}{n-1} \left\{ \int x^2 \; d\hat{F}_n(x) - \left(\int x \; d\hat{F}_n(x) \right)^2 \right\},
$$
which, evaluated at the true population $F$, is
$\frac{n}{n-1}\sigma^2$, so not Fisher consistent. We see this is because the concept of Fisher consistency is not asymptotic, so the factor of $\frac{n}{n-1}$ does not "disappear" as it does in the case of asymptotic consistency. So a lot of usual, not at all "pathological", estimators will be asymptotically consistent, but not Fisher consistent. The opposite case is maybe more unusual? I have had problems finding a natural counterexample in that other case, Fisher consistent but not asymptotically consistent. I guess one place to search is a situation where NO asymptotically consistent estimators exist!
Following https://en.wikipedia.org/wiki/Fisher_consistency Fisher consistency means that if the estimator was calculated using the complete population and not only a sample, the correct value would be obtained. Asymptotic consistency means that as the sample size goes to infinity, the estimator converges in probability to the true value. Neither concept entails the other, see above wikipedia article for examples.
I will give some other examples, not from the wikipedia article, I don't find that particularly clear!
Let $X_1, X_2, \dots, X_n$ be a random sample from a population described by the cumulative distribution function $F$. Now we will consider "functional parameters", that is, parameters which can be written as a functional of $F$,
$$
\theta = \theta(F)
$$
Examples are for instance the expectation $\mu=\mu(F)=\int_{-\infty}^\infty x\; dF(x)$ (where this notation denotes a Riemann-Stieltjes integral, which equals $\int_{-\infty}^\infty x f(x) \; dx $ in the continuous case and a summation in the discrete case). Another example is the median $m=F^{-1}(0.5)$, the interquartile range $\text{IQR} = F^{-1}(0.75) - F^{-1}(0.25)$ and many other examples like the variance $\sigma^2 = \int_{-\infty}^\infty (x-\mu)^2\; dF(x)$.
In the following let us use the expectation $\mu$ as example. So we are interested in estimating from our sample $\mu=\int_{-\infty}^\infty x\; dF(x)$.
One estimator is the arithmetic mean which can be written
$$
\bar{x}=\frac1{n}\sum x_i = \int_{-\infty}^\infty x\; d\hat{F}_n(x)
$$
where $\hat{F}_n$ denotes the empirical distribution function. The value of that at the population $F$ is obtained by replacing $\hat{F}_n$ by $F$ and is the true expectation, showing that the arithmetic mean is Fisher consistent.
Now, maybe you are preoccupied with the "lack of robustness" of the arithmetic mean, that it is unduly influenced by outliers, for instance. So you want some more robust (or resistant) estimator. Two such are the median and winzorized means, that is arithmetic means after throwing away some of the smallest and largest observations. The empirical median, for instance, will be an unbiased estimator of the mean $\mu$ in symmetric families such as the normal. But, the median as an estimator of the mean, is not Fisher-consistent. The empirical median can be written $\hat{F}_n^{-1}(0.5)$ and evaluated at the population $F$ that is in general different from the mean.
The same happens with the winzorized means. Many other robust estimators as well will not be Fisher-consistent.
Fisher-consistency is about "doing the right thing at the model", while robustness is about obtaining reasonable answers also in some neighbourhood of the model, when the model is not right. Those are different goals.
The original poster says in comments: " I would expect that except pathologies, a (classically) consistent estimator is also Fisher consistent." There is no base for such expectation! A simple example is the usual unbiased estimator of variance, $s^2 = \frac1{n-1}\sum_{i=1}^n \left(x_i-\bar{x} \right)^2$. This is unbiased and consistent in the usual sense (whatever is $F$, as long as it has variance!). In functional form this can be written (after some algebra ...)
$$
s^2=\frac{n}{n-1} \left\{ \int x^2 \; d\hat{F}_n(x) - \left(\int x \; d\hat{F}_n(x) \right)^2 \right\},
$$
which, evaluated at the true population $F$ is
$\frac{n}{n-1}\sigma^2$ so not Fisher consistent. We see this is because the concept of Fisher consistency is not asymptotic, so the factor of $\frac{n}{n-1}$ do not "disappear" as it does in the case of asymptotic consistency. So a lot of usual, not at all "pathological" estimators will be asymptotically consistent, but not Fisher consistent. The opposite case is maybe more unusual? I have had problems finding a natural countrexample in that other case, Fisher consistent but not asymptotially consistent. I guess one place to search is a situation where NO asymptotically consistent estimators exist! | Fisher consistency versus "standard" consistency
23,829 | Which matrix should be interpreted in factor analysis: pattern matrix or structure matrix?

Let me recommend you first to read this Q/A. It is about rotations and can hint towards or partly answer your question.
A more specific answer from me about interpretation might be as follows. Theoretically, a factor of factor analysis is a univariate latent feature, or essence. It is not the same thing as a set or cluster of phenomena. The term "construct" in psychometrics is generic and could be conceptualized as a factor (essence) or a cluster (prototype) or something else. Since a factor is a univariate essence, it should be interpreted as the (relatively simple) meaning lying on (or "behind") the intersection of the meanings/contents of the variables loaded by the factor.
With oblique rotation, factors are not orthogonal; still, we usually prefer to interpret a factor as a clean entity, separate from the other factors. That is, ideally, the label of factor X would dissociate from the label of a correlated factor Y, to stress the individuality of both factors, while assuming that "in outer reality" they correlate. Correlatedness thus gets to be a characteristic of the entities that is kept apart from the labels of the entities.
If this is the strategy typically preferred, then the pattern matrix appears to be the main tool for interpretation. Coefficients of the pattern matrix are the unique loads or investments of the given factor into variables, because they are regression coefficients$^1$. [I insist that it is better to say "factor loads variable" than "variable loads factor".] The structure matrix contains the (zero-order) correlations between factors and variables. The more two factors X and Y correlate with each other, the greater can be the discrepancy between the pattern loadings and the structure loadings on some variable V. While V ought to correlate higher and higher with both factors, the regression coefficients can rise for both or for only one of the two. The latter case will mean that it is the part of X which is different from Y that loads V so much; and hence the V-X pattern coefficient is what is highly valuable in the interpretation of X.
A weak side of the pattern matrix is that it is less stable from sample to sample (as regression coefficients usually are, in comparison to correlation coefficients). Relying on the pattern matrix in interpretation requires a well-planned study with sufficient sample size. For a pilot study and tentative interpretation, the structure matrix might be the better choice.
The structure matrix seems to me potentially better than the pattern matrix in back interpretation of variables by factors, if such a task arises. And it can arise when we validate items in questionnaire construction, that is, decide which variables to select and which to drop in the scale being created. Just remember that in psychometrics the common validity coefficient is the correlation (and not regression) coefficient between construct/criterion and item. Usually I include an item in a scale this way: (1) look at the maximal correlation (structure matrix) in the item's row; (2) if the value is above a threshold (say, .40), select the item if its situation in the pattern matrix confirms the decision (i.e. the item is loaded by the factor - and desirably only by this one - whose scale we're constructing). The factor score coefficient matrix is also useful, in addition to pattern and structure loadings, in the job of selecting items for a factor construct.
If you do not perceive a construct as a univariate trait then using classic factor analysis would be questionable. A factor is thin and sleek; it is not like a pangolin or an armful of whatever. A variable loaded by it is its mask: the factor in it shows through what appears to be completely not that factor.
$^1$ Pattern loadings are the regression coefficients of the factor model equation. In the model, the variable being predicted is the standardized (in a FA of correlations) or centered (in a FA of covariances) observed feature, while the factors are standardized (with variance 1) latent features. The coefficients of that linear combination are the pattern matrix values. As becomes clear from the pictures below, pattern coefficients are never greater than structure coefficients, which are correlations or covariances between the variable being predicted and the standardized factors.
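The algebraic link between the two matrices is $\mathbf S = \mathbf P \boldsymbol\Phi$, where $\mathbf P$ is the pattern matrix, $\mathbf S$ the structure matrix and $\boldsymbol\Phi$ the factor correlation matrix. A toy sketch (Python/NumPy; the loadings and $\boldsymbol\Phi$ below are made-up numbers, not from any real data):

```python
import numpy as np

# Hypothetical pattern matrix P: regression weights of 2 factors on 3 variables
P = np.array([[0.8, 0.1],
              [0.7, 0.0],
              [0.1, 0.9]])

# Hypothetical factor correlation matrix Phi after an oblique rotation
Phi = np.array([[1.0, 0.5],
                [0.5, 1.0]])

# Structure matrix: correlations between the variables and the factors
S = P @ Phi
```

With non-negative pattern loadings and positively correlated factors, every structure coefficient here is at least as large as the corresponding pattern coefficient; with orthogonal factors ($\boldsymbol\Phi = \mathbf I$) the two matrices coincide.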
Some geometry. Loadings are coordinates of variables (as their vector endpoints) in the factor space. We usually encounter those on "loading plots" and "biplots". See formulas.
Left. Without rotation or with orthogonal rotation, axes (factors) are geometrically orthogonal (as well as statistically uncorrelated) to each other. The only coordinates possible are the square (perpendicular) ones shown. That is what is called "factor loading matrix" values.
Right. After oblique rotation factors are no longer orthogonal (and statistically they are correlated). Here two types of coordinates can be drawn: perpendicular (and that are structure values, correlations) and skew (or, to coin a word, "alloparallel": and that are pattern values, regression weights).
["Perpendicular" and "skew" have been my labels. At one point later I've learned that in math jargon, the perpendicular coordinates are called covariant ones or projection ones, and the skew coordinates are called contravariant ones or parallel-axis ones].
Of course, it is possible to plot pattern or structure coordinates while forcing the axes to be geometrically orthogonal on the plot - this is what happens when you take the table of the loadings (pattern or structure) and give it to your software to build a standard scatterplot of them - but then the angle between the variable vectors will appear widened. So it will be a distorted loading plot, since the aforesaid original angle was the correlation coefficient between the variables.
See detailed explanation of a loading plot (in settings of orthogonal factors) here.
23,830 | Why do all the PLS components together explain only a part of the variance of the original data?

The sum of variances of all PLS components is normally less than 100%.
There are many variants of partial least squares (PLS). What you used here is PLS regression of a univariate response variable $\mathbf y$ onto several variables $\mathbf X$; this algorithm is traditionally known as PLS1 (as opposed to other variants; see Rosipal & Kramer, 2006, Overview and Recent Advances in Partial Least Squares, for a concise overview). PLS1 was later shown to be equivalent to a more elegant formulation called SIMPLS (see the reference to the paywalled de Jong, 1993, in Rosipal & Kramer). The view provided by SIMPLS helps to understand what is going on in PLS1.
It turns out that what PLS1 does, is to find a sequence of linear projections $\mathbf t_i = \mathbf X \mathbf w_i$, such that:
Covariance between $\mathbf y$ and $\mathbf t_i$ is maximal;
All weight vectors have unit length, $\|\mathbf w_i\|=1$;
Any two PLS components (aka score vectors) $\mathbf t_i$ and $\mathbf t_j$ are uncorrelated.
Note that weight vectors do not have to be (and are not) orthogonal.
This means that if $\mathbf X$ consists of $k=10$ variables and you found $10$ PLS components, then you found a non-orthogonal basis with uncorrelated projections on the basis vectors. One can mathematically prove that in such a situation the sum of variances of all these projections will be less than the total variance of $\mathbf X$. They would be equal if the weight vectors were orthogonal (as e.g. in PCA), but in PLS this is not the case.
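This can be verified numerically with a bare-bones NIPALS-style PLS1 (a Python/NumPy sketch of my own, not any particular library's implementation): extracting all $k$ components yields mutually uncorrelated score vectors whose variances sum to strictly less than the total variance of $\mathbf X$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 200, 5
X = rng.normal(size=(n, k))
X -= X.mean(axis=0)                      # centre the predictors
y = X @ rng.normal(size=k) + rng.normal(size=n)
y -= y.mean()

Xd = X.copy()
scores = []
for _ in range(k):                       # extract all k PLS1 components
    w = Xd.T @ y
    w /= np.linalg.norm(w)               # unit-length weight vector
    t = Xd @ w                           # score vector (PLS component)
    scores.append(t)
    p = Xd.T @ t / (t @ t)               # loadings used for deflation
    Xd -= np.outer(t, p)                 # deflate X

T = np.column_stack(scores)
explained = T.var(axis=0).sum()          # sum of component variances
total = X.var(axis=0).sum()              # total variance of X
```

Here `explained < total` holds strictly, while the columns of `T` are (numerically) uncorrelated; with orthogonal weight vectors, as in PCA, the two quantities would be equal.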
I don't know of any textbook or paper that explicitly discusses this issue, but I have earlier explained it in the context of linear discriminant analysis (LDA) that also yields a number of uncorrelated projections on non-orthogonal unit weight vectors, see here: Proportion of explained variance in PCA and LDA.
23,831 | Advantage of GLMs in terminal nodes of a regression tree?

Like you say this idea has been explored before (albeit under different names) and there actually is a broad literature on that topic. The names that I associate with this line of work are Wei-Yin Loh, Probal Chaudhuri, Hongshik Ahn, Joao Gama, Antonio Ciampi or Achim Zeileis. You can find a rather comprehensive description of pros and cons and different algorithms (slightly outdated) in this thesis.
Trees with GLM have the following (dis-) advantages (paraphrased from here - you can easily find the preprint by googling):
- The functional form of a GLM can sometimes appear to be too rigid for the whole data set, even if the model might fit well in a subsample.
- Especially with large data sets, or data sets where knowledge about the underlying processes is limited, setting up useful parametric models can be difficult and their performance with respect to prediction may not be sufficient.
- Trees are able to incorporate non-linear relationships or find the functional relationship by themselves and therefore can have higher predictive power in settings where classic models are biased or even fail.
- Due to their explorative character, trees with GLM can reveal patterns hidden within data modelled with GLM or provide further explanation of surprising or counter-intuitive results by incorporating additional information from other covariates.
- They can be helpful in identifying segments of the data for which an a priori assumed model fits well. It may be that overall this model has a poor fit but that this is due to some contamination (for example merging two separate data files or systematic errors during data collection at a certain date). Trees with GLM might partition the data in a way that enables us to find the segments that have poor fit and the segments for which the fit may be rather good.
- The tree-like structure allows the effects of these covariates to be non-linear and highly interactive, as opposed to assuming a linear influence on the linked mean.
- Trees with GLM may lead to additional insight for an a priori assumed parametric model, especially if the underlying mechanisms are too complex to be captured by the GLM.
- Trees with GLM can automatically detect interactions, non-linearity, model misspecification, unregarded covariate influence and so on.
- They can be used as an exploratory tool in complex and large data sets, for which they have a number of advantages.
- Compared to a global GLM, a GLM model tree can alleviate the problem of bias and model misspecification and provide a better fit.
- Compared to tree algorithms with constants, the specification of a parametric model in the terminal nodes can add extra stability and therefore reduce the variance of the tree methods.
- Being a hybrid of trees and classic GLM-type models, the performance usually lies between those two poles: they tend to exhibit higher predictive power than classic models but less than non-parametric trees.
- They add some complexity compared to a classical model because of the splitting process, but are usually more parsimonious than non-parametric trees.
- They show a higher prediction variance than a global model in bootstrap experiments, but much less than non-parametric trees (even pruned ones).
- Using a GLM in the node of a tree typically leads to smaller trees.
- Using a GLM in the node of a tree typically leads to more stable predictions as compared to a tree with only a constant (but not as stable as bagging or forests of trees).
- The VC dimension of a tree with GLM in the nodes is higher than that of the equivalent tree with only a constant (as the latter is a special case of the former).
Regarding the "effectiveness" (I assume you mean predictive performance) of trees with GLM, most of the papers cited in the above two links do provide some investigation into that. However, a comprehensive, broad comparison of all the algorithms with competitors like standard trees have not been done to the best of my knowledge. | Advantage of GLMs in terminal nodes of a regression tree? | Like you say this idea has been explored before (albeit under different names) and there actually is a broad literature on that topic. The names that I associate with this line of work are Wei-Yin Loh | Advantage of GLMs in terminal nodes of a regression tree?
23,832 | Is it "okay" to plot a regression line for ranked data (Spearman correlation)?

A rank-correlation may be used to pick up monotonic association between variates as you note; as such you wouldn't normally plot a line for that.
There are situations where it makes perfect sense to use rank-correlations to actually fit lines to numeric-y vs numeric-x, whether Kendall or Spearman (or some other). See the discussion (and in particular, the last plot) here.
That's not your situation, though. In your case, I'd be inclined to just present a scatterplot of the original data, perhaps with a smooth relationship (e.g. by LOESS).
You expect the relationship to be monotonic; you might perhaps try to estimate and plot a monotonic relationship. [There's an R-function discussed here that can fit isotonic regression -- while the example there is unimodal not isotonic, the function can do isotonic fits.]
Here's an example of the kind of thing I mean:
The plot shows a monotonic relationship between x and y; the red curve is a loess smooth (in this case generated in R by scatter.smooth), which also happens to be monotonic. (There are ways to obtain smooth fits that are guaranteed to be monotonic, but in this case the default loess smooth was monotonic, so I didn't feel the need to worry.)
Plot of rank(y) vs rank(x), indicating a monotonic relationship. The green line shows the ranks of the loess curve fitted values against rank(x).
The correlation between ranks of x and y (i.e. the Spearman correlation) is 0.892 - a high monotonic association. Similarly, the Spearman correlation between the (montonic) fitted loess-smoothed curve ($\hat{y}$) and the y-values is also 0.892. [This is not surprising, though, since it would be true of any curve which is a monotonic-increasing function of x, all of which would also correspond to the green line. The green line isn't a regression line between rank(x) and rank(y), but it's the line corresponding to a monotonic fit in the original plot. The 'regression line' for the ranked data has slope 0.892, not 1, so it's a little "flatter".]
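The computation behind numbers like those is simply a Pearson correlation applied to ranks; a minimal sketch (Python/NumPy, with made-up monotonic data rather than the data plotted above):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 3, size=100)
y = np.exp(2 * x) + rng.normal(scale=0.5, size=100)  # monotonic, strongly non-linear

def ranks(a):
    # rank transform 1..n (no tie handling; fine for continuous data)
    r = np.empty(len(a), dtype=float)
    r[np.argsort(a)] = np.arange(1, len(a) + 1)
    return r

# Spearman's correlation = Pearson correlation of the ranks
rho_spearman = np.corrcoef(ranks(x), ranks(y))[0, 1]
r_pearson = np.corrcoef(x, y)[0, 1]
```

On data like this the rank correlation comes out clearly higher than the plain Pearson correlation: it captures the monotonic trend that the linear correlation understates on curved data.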
If you're not displaying anything but rank(Y) vs X, I think I'd avoid using lines on the plots; as far as I can see they don't convey much of value above the correlation coefficient. And you already said you're only interested in the trend.
[I don't know that it's wrong to plot a regression line on a ranked-y vs ranked-x plot, the difficulty would be its interpretation.]
23,833 | Is it "okay" to plot a regression line for ranked data (Spearman correlation)? | The use of Spearman's $\rho$ is equivalent to using the proportional odds ordinal logistic model if one were to rank the $X$ vector while modeling. The P.O. model typically models $X$ on its original scale, and can include nonlinear terms. For getting predictions, it is advantageous to use a model-based approach. You can for example plot $X$ vs. the predicted mean $Y$ or predicted median $Y$ from a P.O. model fit. Examples are in the handouts from http://biostat.mc.vanderbilt.edu/rms. | Is it "okay" to plot a regression line for ranked data (Spearman correlation)? | The use of Spearman's $\rho$ is equivalent to using the proportional odds ordinal logistic model if one were to rank the $X$ vector while modeling. The P.O. model typically models $X$ on its original | Is it "okay" to plot a regression line for ranked data (Spearman correlation)?
The use of Spearman's $\rho$ is equivalent to using the proportional odds ordinal logistic model if one were to rank the $X$ vector while modeling. The P.O. model typically models $X$ on its original scale, and can include nonlinear terms. For getting predictions, it is advantageous to use a model-based approach. You can for example plot $X$ vs. the predicted mean $Y$ or predicted median $Y$ from a P.O. model fit. Examples are in the handouts from http://biostat.mc.vanderbilt.edu/rms. | Is it "okay" to plot a regression line for ranked data (Spearman correlation)?
The use of Spearman's $\rho$ is equivalent to using the proportional odds ordinal logistic model if one were to rank the $X$ vector while modeling. The P.O. model typically models $X$ on its original |
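Since Spearman's $\rho$ is just Pearson's correlation applied to the within-variable ranks, a line fitted to rank(Y) vs rank(X) lives on the rank scale, which is why its interpretation is awkward. A small illustrative sketch of that equivalence in Python/NumPy (no ties assumed; the data are made up, not from the answers above):

```python
import numpy as np

def ranks(v):
    # ranks 1..n, assuming no ties
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(1, len(v) + 1)
    return r

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = x ** 3  # perfectly monotone but nonlinear

pearson = np.corrcoef(x, y)[0, 1]
spearman = np.corrcoef(ranks(x), ranks(y))[0, 1]
print(pearson, spearman)  # Pearson < 1, Spearman == 1
```

Any strictly monotone transformation of $y$ leaves the second number unchanged, which is exactly the sense in which Spearman only measures the trend.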
23,834 | Intuition behind RKHS (Reproducing Kernel Hilbert Space)? | As the name says, a reproducing kernel Hilbert space is a Hilbert space, so some knowledge of Hilbert space/functional analysis comes in handy ... But you might as well start with RKHS, and then see what you do not understand, and what you need to read to cover that.
The usual example of Hilbert spaces, $L_2$, has the problem that its members are not functions, but equivalence classes of functions that coincide except on a set of (Lebesgue) measure zero. That way, they always give the same results when integrated ... and that is what $L_2$ spaces can be used for. Members of $L_2$ spaces cannot really be evaluated, since you can change the value at one point without changing the value of the integral.
So in applications where you really want functions that you can evaluate at individual points (like in approximation theory, regression, ...) RKHS come in handy, because the defining property is equivalent to the requirement that the evaluation functional
$$
E_x(f) = f(x)
$$
is continuous in $f$ for each $x$. So you can evaluate the member functions, and replacing $f$ with some other function, say $f+\epsilon$ (in some sense ...) will only change the value a little bit. That is the intuition you asked for.
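The continuity of the evaluation functional can be made concrete: by the reproducing property and Cauchy-Schwarz, $|f(x)| = |\langle f, k(x,\cdot)\rangle| \le \|f\|\,\sqrt{k(x,x)}$, so functions close in the RKHS norm have close values at every point. A numerical sanity check for a function in the span of a few Gaussian kernel sections (an illustrative sketch; the centres and coefficients are arbitrary):

```python
import numpy as np

def rbf(u, v, gamma=0.5):
    return np.exp(-gamma * (u - v) ** 2)

centers = np.array([-1.0, 0.0, 2.0])
a = np.array([0.7, -1.2, 0.5])           # f = sum_i a_i k(x_i, .)
K = rbf(centers[:, None], centers[None, :])
f_norm = np.sqrt(a @ K @ a)              # RKHS norm of f

for x in np.linspace(-3, 3, 13):
    fx = a @ rbf(centers, x)             # evaluate f at the point x
    bound = f_norm * np.sqrt(rbf(x, x))  # here k(x,x) = 1
    assert abs(fx) <= bound + 1e-12
print("evaluation bound |f(x)| <= ||f|| sqrt(k(x,x)) holds")
```

No such bound exists in $L_2$, which is exactly why point evaluation makes sense in an RKHS but not there.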
23,835 | Number of principal components when preprocessing using PCA in caret package in R | By default, caret keeps the components that explain 95% of the variance.
But you can change it by using the thresh parameter.
# Example
preProcess(training, method = "pca", thresh = 0.8)
You can also set a particular number of components by setting the pcaComp parameter.
# Example
preProcess(training, method = "pca", pcaComp = 7)
If you use both parameters, pcaComp has precedence over thresh.
Please see: https://www.rdocumentation.org/packages/caret/versions/6.0-77/topics/preProcess
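For readers who want to see what the thresh rule does under the hood: it keeps the smallest number of leading components whose cumulative explained-variance ratio reaches the threshold. A rough NumPy sketch of that selection rule (an illustration of the idea, not caret's actual code):

```python
import numpy as np

def n_components_for_thresh(X, thresh=0.95):
    """Smallest number of principal components whose cumulative
    explained-variance ratio reaches `thresh`."""
    Xc = X - X.mean(axis=0)
    # squared singular values are the component variances up to a constant
    s = np.linalg.svd(Xc, compute_uv=False)
    ratio = (s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(np.cumsum(ratio), thresh) + 1)

rng = np.random.default_rng(0)
# three uncorrelated columns with very different variances
X = rng.normal(size=(500, 3)) * np.array([10.0, 1.0, 0.1])
print(n_components_for_thresh(X, 0.95))   # first component dominates -> 1
```

Raising the threshold can only keep the same number of components or more, which is why thresh = 0.95 is a sensible default but worth tuning.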
23,836 | Regression with very small sample size | @Glen_b is right about the nature of the normality assumption in regression1.
I think your bigger problem is going to be that you don't have enough data to support 4 to 5 explanatory variables. The standard rule of thumb2 is that you should have at least 10 data per explanatory variable, i.e. 40 or 50 data in your case (and this is for ideal situations where there isn't any question about the assumptions). Because your model would not be completely saturated3 (you have more data than parameters to fit), you can get parameter (slope, etc.) estimates and under ideal circumstances the estimates are asymptotically unbiased. However, it is quite likely that your estimates will be a long way off from the true values and your SE's / CI's will be very large, so you will have no statistical power. Note that using a nonparametric, or other alternative, regression analysis will not get you out of this problem.
What you will need to do here is either pick a single explanatory variable (before looking at your data!) based on prior theories in your field or your hunches, or you should combine your explanatory variables. A reasonable strategy for the latter option is to run a principal components analysis (PCA) and use the first principal component as your explanatory variable.
References:
1. What if residuals are normally distributed but Y is not?
2. Rules of thumb for minimum sample size for multiple regression
3. Maximum number of independent variables that can be entered into a multiple regression equation
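A quick simulation illustrates the precision problem: with the same noise level, slope estimates from samples of size 12 scatter several times more widely than from samples of size 120 (the model and numbers below are purely illustrative):

```python
import numpy as np

def slope_spread(n, reps=400, seed=0):
    """Empirical s.d. of the OLS slope over repeated samples of size n."""
    rng = np.random.default_rng(seed)
    slopes = []
    for _ in range(reps):
        x = rng.normal(size=n)
        y = 1.0 + 0.5 * x + rng.normal(size=n)  # true slope is 0.5
        X = np.column_stack([np.ones(n), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        slopes.append(beta[1])
    return np.std(slopes)

print(slope_spread(12), slope_spread(120))  # small n -> much larger spread
```

And this is the single-predictor case; with 4 or 5 predictors competing for 12 observations the estimates degrade further still.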
23,837 | How do support vector machines avoid overfitting? | Maximising the margin is not the only trick (although it is very important). If a non-linear kernel function is used, then the smoothness of the kernel function also has an effect on the complexity of the classifier and hence on the risk of over-fitting. If you use a Radial Basis Function (RBF) kernel, for example, and set the scale factor (kernel parameter) to a very small value, the SVM will tend towards a linear classifier. If you use a high value, the output of the classifier will be very sensitive to small changes in the input, which means that even with margin maximisation, you are likely to get over-fitting.
Unfortunately, the performance of the SVM can be quite sensitive to the selection of the regularisation and kernel parameters, and it is possible to get over-fitting in tuning these hyper-parameters via e.g. cross-validation. The theory underpinning SVMs does nothing to prevent this form of over-fitting in model selection. See my paper on this topic:
G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010.
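The effect of the kernel parameter is visible directly in the Gram matrix: with a very small RBF parameter all entries are close to 1 (a very smooth, nearly linear machine), while a very large one drives the matrix towards the identity, so each point is similar only to itself, a recipe for memorising the training set. A small numerical illustration (my own, using the $\exp(-\gamma d^2)$ parameterisation):

```python
import numpy as np

def rbf_gram(x, gamma):
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-gamma * d2)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])

K_small = rbf_gram(x, gamma=1e-4)   # very smooth kernel
K_large = rbf_gram(x, gamma=100.0)  # very wiggly kernel

print(K_small.min())                      # all entries close to 1
print(np.abs(K_large - np.eye(5)).max())  # essentially the identity matrix
```

Neither extreme generalises well, which is why the kernel parameter has to be tuned alongside the regularisation parameter, and why that tuning can itself over-fit.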
23,838 | Identification of useless questions from a questionnaire | Both classical test theory (CTT) and item response theory (IRT) can provide guidance as far as which items are contributing to the latent trait you wish to measure, and which do not. With CTT, consider 1) item difficulty, 2) item correlation to total score, 3) item variance, and 4) impact on internal consistency estimates (e.g., Cronbach's alpha) if the item is removed.
Items that are too easy or too difficult tend not to help separate subjects (discriminate between high scorers and low scorers). Unless you are interested in measuring differences between top performers, very difficult questions should be considered for removal. In a similar vein, very easy items are only suitable if you are interested in the performance of low performers.
All items should correlate positively with total score and you can set a lower bound for that correlation of around 0.20 as a guide. Low correlations or negative correlations may indicate that there are wording problems in your questionnaire and that the question should be reversed scored.
Items with low variance (variability of scores) should be considered for removal as they don't separate subjects and don't contribute to the information gathered from the survey. Items with very high variance may be measuring something else than the construct/trait you wish to measure.
If the estimate of internal consistency improves with the item removed, then the item should be considered for removal, or re-worded.
Items that everyone gets correct are sometimes maximum items and those everyone gets wrong are sometimes called minimum items. They don't contribute to the information you are trying to gather.
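Points 2) and 4) are mechanical to compute. A hedged sketch using the standard formulas (item-total correlation, and Cronbach's alpha, $\alpha = \frac{k}{k-1}\bigl(1 - \sum_i \mathrm{var}(X_i) / \mathrm{var}(\sum_i X_i)\bigr)$); the toy data are made up, and any real analysis should use a tested package:

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_subjects, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def item_total_correlations(items):
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total)[0, 1]
                     for j in range(items.shape[1])])

# four identical items: a perfectly consistent (if redundant) scale
base = np.array([3.0, 1.0, 4.0, 2.0, 5.0])
perfect = np.column_stack([base] * 4)
print(cronbach_alpha(perfect))           # -> 1 (up to floating-point error)
print(item_total_correlations(perfect))  # all 1
```

Recomputing alpha with each item deleted in turn, and flagging items whose removal raises it, is then a one-line loop.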
If you are developing a high stakes questionnaire or plan on marketing the questionnaire you should definitely consider IRT. However, it is a large subject area and unless you are truly interested it is not probably worth the space to get into it here.
Hope this helps.
23,839 | Identification of useless questions from a questionnaire | I believe what you are looking for is Item Response Theory.
The "useless" questions you refer to are items with poor discrimination.
Using IRT analysis you can calculate the discrimination, difficulty, and the associated probability of guessing on items by survey participants.
R has easy-to-use packages for IRT, and I imagine other statistical software packages do as well.
If you want a quick overview here's the wikipedia page, but I would advise researching it more.
http://en.wikipedia.org/wiki/Item_response_theory
The "useless" questions you refer to are items with poor discrimination.
Using IRT analysis you can calculate the discrimination, difficult | Identification of useless questions from a questionnaire
I believe what you are looking for is Item Response Theory.
The "useless" questions you refer to are items with poor discrimination.
Using IRT analysis you can calculate the discrimination, difficulty, and the associated probability of guessing on items by survey participants.
The R program has an easy package for using IRT and I imagine other statistical software packages do as well.
If you want a quick overview here's the wikipedia page, but I would advise researching it more.
http://en.wikipedia.org/wiki/Item_response_theory | Identification of useless questions from a questionnaire
I believe what you are looking for is Item Response Theory.
The "useless" questions you refer to are items with poor discrimination.
Using IRT analysis you can calculate the discrimination, difficult |
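To make "poor discrimination" concrete: in the two-parameter logistic IRT model an item's response curve is $P(\theta) = 1/(1 + e^{-a(\theta - b)})$ with discrimination $a$ and difficulty $b$, and a nearly flat curve (small $a$) is exactly the "useless" item. A quick illustration in plain Python (not tied to any particular IRT package; the parameter values are arbitrary):

```python
import math

def irf_2pl(theta, a, b):
    """Two-parameter logistic item response function."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# probability gap between a low-ability and a high-ability respondent
sharp = irf_2pl(2, a=2.0, b=0.0) - irf_2pl(-2, a=2.0, b=0.0)
flat  = irf_2pl(2, a=0.2, b=0.0) - irf_2pl(-2, a=0.2, b=0.0)

print(round(sharp, 3), round(flat, 3))  # the flat item barely separates them
```

Fitting $a$ and $b$ per item from response data, and dropping items with near-zero $\hat a$, is the IRT version of the pruning described above.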
23,840 | Friedman test vs Wilcoxon test | Friedman test is not the extension of Wilcoxon test, so when you have only 2 related samples it is not the same as Wilcoxon signed rank test. The latter accounts for the magnitude of difference within a case (and then ranks it across cases), whereas Friedman only ranks within a case (and never across cases): it is less sensitive.
Friedman is actually almost the extension of sign test. With 2 samples, their p-values are very close, with Friedman being just slightly more conservative (these two tests treat ties in somewhat different ways). This small difference quickly vanishes as the sample size grows. So, for two related samples these two tests are really peer alternatives.
The test which is equivalent to Wilcoxon - in the same sense as Friedman to sign - is the not very well known Quade test, mentioned for example here: http://www.itl.nist.gov/div898/software/dataplot/refman1/auxillar/friedman.htm.
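The within-case vs across-case ranking distinction is easy to check numerically: for two related samples, Friedman-style within-case ranks depend only on the sign of each paired difference, while the Wilcoxon signed-rank statistic also ranks the magnitudes across cases. A hand-rolled sketch (illustrative only, not a full implementation of either test, and assuming no ties):

```python
import numpy as np

def within_case_ranks(x, y):
    # Friedman-style ranks for k = 2 treatments: 1 for the smaller, 2 for the larger
    return np.where(x < y, 1, 2), np.where(x < y, 2, 1)

def wilcoxon_positive_rank_sum(d):
    # rank |d| across cases (no ties assumed), sum the ranks of positive differences
    r = np.empty(len(d))
    r[np.argsort(np.abs(d))] = np.arange(1, len(d) + 1)
    return r[d > 0].sum()

x = np.zeros(4)
d1 = np.array([1.0, 2.0, -3.0, 4.0])   # same signs as d2 ...
d2 = np.array([4.0, 3.0, -1.0, 2.0])   # ... but different magnitudes

print(within_case_ranks(x, x + d1))    # identical to within_case_ranks(x, x + d2)
print(wilcoxon_positive_rank_sum(d1), wilcoxon_positive_rank_sum(d2))  # 7.0 9.0
```

The two datasets are indistinguishable to the within-case ranking (hence to Friedman/sign) but give different Wilcoxon statistics, which is the sensitivity difference described above.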
23,841 | Why did Thomas Bayes find Bayes' theorem so challenging? | Bayes' paper† begins:–
Given the number of times in which an unknown event has happened and failed: Required the chance that the probability of its happening in a single trial lies somewhere between any two degrees of probability that can be named.
Coming up with the theorem that now bears his name may not have been the most challenging part, nor his primary concern; rather he struggled with applying it to the problem of inference, & especially with justifying the assumption of a particular prior probability distribution. Argumentation on these issues continues into the 21st Century.
† Bayes (1763), "An Essay towards solving a Problem in the Doctrine of Chances", Philosophical Transactions of the Royal Society of London 53, pp. 370–418.
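In modern terms, Bayes' stated problem has a clean answer: with $s$ successes and $f$ failures and a uniform prior, the posterior for the success probability is Beta$(s+1, f+1)$, and the required "chance" is that distribution's mass on the named interval. A small numerical sketch (crude grid integration, assuming the uniform prior whose justification Bayes struggled with):

```python
import numpy as np

def posterior_prob_between(s, f, lo, hi, grid=100_001):
    """P(lo <= p <= hi | s successes, f failures), uniform prior on p."""
    p = np.linspace(0.0, 1.0, grid)
    w = p[1] - p[0]
    dens = p ** s * (1.0 - p) ** f       # unnormalised Beta(s+1, f+1) density
    dens /= dens.sum() * w               # crude numerical normalisation
    mask = (p >= lo) & (p <= hi)
    return float(dens[mask].sum() * w)

# no data yet: the posterior is uniform, so the answer is the interval length
print(posterior_prob_between(0, 0, 0.25, 0.75))   # ~0.5
# 7 successes and 3 failures: most posterior mass lies above 0.4
print(posterior_prob_between(7, 3, 0.4, 1.0))     # roughly 0.97
```

The mechanics are a few lines today; what exercised Bayes, and still exercises people, is whether the uniform prior in the first line of the function is defensible.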
23,842 | Why did Thomas Bayes find Bayes' theorem so challenging? | Because everything is better understood now after great efforts by many people. As a result of that, it is much easier to teach these concepts in an understandable, intuitive manner. Imagine that you only know what was known at that time, instead of everything you've learnt now.
You can think of it as a puzzle: the more pieces are in place the easier it is to solve the remainder. The comparison may be a stretch, but discovering fire was no trivial matter either at the time it happened even if it may seem like one now.
23,843 | Likelihood Ratio Test and Wald test provide different conclusion for glm in R | The main problem is that if you're going to use the ratio as your response variable, you should be using the weights argument. You must have ignored a warning about "non-integer #successes in a binomial glm" ...
Dilution <- c(1/128, 1/64, 1/32, 1/16, 1/8, 1/4, 1/2, 1, 2, 4)
NoofPlates <- rep(x=5, times=10)
NoPositive <- c(0, 0, 2, 2, 3, 4, 5, 5, 5, 5)
Data <- data.frame(Dilution, NoofPlates, NoPositive)
fm1 <- glm(formula=NoPositive/NoofPlates~log(Dilution),
family=binomial("logit"), data=Data, weights=NoofPlates)
coef(summary(fm1))
## Estimate Std. Error z value Pr(>|z|)
## (Intercept) 4.173698 1.2522190 3.333042 0.0008590205
## log(Dilution) 1.622552 0.4571016 3.549653 0.0003857398
anova(fm1,test="Chisq")
## Df Deviance Resid. Df Resid. Dev Pr(>Chi)
## NULL 9 41.212
## log(Dilution) 1 37.979 8 3.233 7.151e-10 ***
The LRT and Wald test results are still quite different ($p$-values of $4 \times 10^{-4}$ vs. $7 \times 10^{-10}$), but for practical purposes we can go ahead and say they're both strongly significant ... (In this case (with a single parameter), aod::wald.test() gives
exactly the same p-value as summary().)
The Wald vs profile confidence intervals are also moderately different, but whether CIs [shown below] of (0.7,2.5) (Wald) and (0.9, 2.75) (LRT) are practically different depends on the particular situation.
Wald:
confint.default(fm1)
## 2.5 % 97.5 %
## (Intercept) 1.7193940 6.628002
## log(Dilution) 0.7266493 2.518455
Profile:
confint(fm1)
## 2.5 % 97.5 %
## (Intercept) 2.2009398 7.267565
## log(Dilution) 0.9014053 2.757092
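The disagreement itself is not specific to glm: for a single binomial proportion the Wald statistic $(\hat p - p_0)^2 / (\hat p(1-\hat p)/n)$ and the likelihood-ratio statistic $2(\ell(\hat p) - \ell(p_0))$ are only asymptotically equivalent, and they can differ a lot when $\hat p$ is near the boundary. A toy illustration with numbers of my own choosing:

```python
import math

n, x, p0 = 10, 9, 0.5
phat = x / n

# Wald chi-square statistic for H0: p = p0
wald = (phat - p0) ** 2 / (phat * (1 - phat) / n)

# likelihood-ratio statistic, 2 * (loglik at MLE - loglik at p0)
def loglik(p):
    return x * math.log(p) + (n - x) * math.log(1 - p)

lrt = 2 * (loglik(phat) - loglik(p0))

print(round(wald, 2), round(lrt, 2))  # 17.78 vs 7.36 (rounded)
```

Both exceed the 5% chi-square cutoff of 3.84, so the tests agree qualitatively here, but the Wald statistic is more than twice the LRT one - the same kind of gap seen in the glm output above.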
23,844 | regarding conditional independence and its graphical representation | The inverse covariance matrix can be used to work out conditional variances and covariances for multivariate Gaussian distributions. An earlier question gives some references
For example to find the conditional covariance of $Y$ and $Z$ given the value $X=x$, you would take the bottom right corner of the inverse covariance matrix
$$\left( \begin{array}{rr}
1 & -1 \\
-1 & 3 \end{array} \right) \text{ and re-invert it to }\left( \begin{array}{rr}
\tfrac32 & \tfrac12 \\
\tfrac12 & \tfrac12 \end{array} \right)$$
which does indeed give the covariance matrix of $Y$ and $Z$ conditioned on the value for $X=x$.
So similarly to find the conditional covariance matrix of $X$ and $Y$ given the value for $Z=z$, you would take the top left corner of the inverse covariance matrix
$$\left( \begin{array}{cc}
1 & 0 \\
0 & 1 \end{array} \right) \text{ and re-invert it to }\left( \begin{array}{cc}
1 & 0 \\
0 & 1 \end{array} \right)$$
telling you that the conditional covariance between $X$ and $Y$ given $Z=z$ is $0$ (and that each of their conditional variances is $1$).
To conclude that this zero conditional covariance implies conditional independence, you also have to use the fact this is a multivariate Gaussian (as in general zero covariance does not necessarily imply independence). You know this from the construction.
Arguably you also know about the conditional independence from the construction, since you are told that $\epsilon_1$ and $\epsilon_2$ are iid, so conditioned on a particular value for $Z=z$, $X=z+\epsilon_1$ and $Y=z+\epsilon_2$ are also iid. If you know $Z=z$, there is no additional information from $X$ that helps you say anything about possible values of $Y$.
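The whole calculation can be verified numerically from the construction: with $Z, \epsilon_1, \epsilon_2 \sim N(0,1)$ iid, the covariance of $(X, Y, Z)$ is $\Sigma = \begin{pmatrix} 2 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 1 \end{pmatrix}$, and the blocks of $\Sigma^{-1}$ reproduce the matrices above:

```python
import numpy as np

# covariance of (X, Y, Z) with X = Z + e1, Y = Z + e2, all N(0,1) iid
Sigma = np.array([[2.0, 1.0, 1.0],
                  [1.0, 2.0, 1.0],
                  [1.0, 1.0, 1.0]])
P = np.linalg.inv(Sigma)        # precision (inverse covariance) matrix

# top-left 2x2 block of P is the identity: P[0, 1] is 0, so X and Y are
# conditionally independent given Z (zero partial covariance + Gaussianity)
print(P.round(6))

# conditional covariance of (Y, Z) given X: re-invert the bottom-right block
cond_YZ_given_X = np.linalg.inv(P[1:, 1:])
print(cond_YZ_given_X)          # approximately [[1.5, 0.5], [0.5, 0.5]]
```

This is just the block-inversion identity from the answer carried out mechanically.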
23,845 | regarding conditional independence and its graphical representation | This is a supplement to the correct and accepted answer. In particular, the original question contains a follow-up question about the statement the book makes.
Also, the left graph in the following figure is claimed to capture the independence relationship between $X$ and $Y$, why?
This is what is addressed in this answer, and is the only thing addressed in this answer.
To make sure we are on the same page, in what follows I use this definition of (undirected) conditional independence graph which corresponds (at least roughly) to Markov random fields:
Definition: The conditional independence graph of $X$ is the undirected graph $G=(K,E)$ where $K=\{ 1, 2, \dots, k \}$ and $(i,j)$ is not in the edge set if and only if $X_i \perp \! \! \! \perp X_j | X_{K \setminus \{i,j\}}$. (Where $X_{K \setminus \{i,j\}}$ denotes the vector of all of the random variables except for $X_i$ and $X_j$.)
From p. 60 of Whittaker, Graphical Models in Applied Multivariate Statistics (1990).
Here, using the argument given by Henry in the correct, accepted answer, we can establish that $X$ and $Y$ are conditionally independent given $Z$; in notation, $X \perp \! \! \! \perp Y \ | \ Z$.
Since the only three random variables are $X, Y$, and $Z$, this means that $X$ and $Y$ are conditionally independent when given all of the other remaining random variables (in this case just $Z$).
Using the definition of conditional independence graph given above, this means that all edges in the graph should be included except for the edge between $X$ and $Y$. Indeed, this is exactly what is shown on the right graph of that picture.
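Under this definition, the graph can be read mechanically off the inverse covariance (precision) matrix: an off-diagonal zero means the edge is absent. A small sketch, assuming the variable order $(X, Y, Z)$ and the precision matrix implied by the accepted answer's corner computations:

```python
# Precision (inverse covariance) matrix of (X, Y, Z); the zero in the
# (X, Y) position encodes the conditional independence of X and Y given Z.
K = [[1, 0, -1],
     [0, 1, -1],
     [-1, -1, 3]]
labels = ["X", "Y", "Z"]

# Edge (i, j) is present iff the off-diagonal entry K[i][j] is nonzero.
edges = {(labels[i], labels[j])
         for i in range(len(K)) for j in range(i + 1, len(K))
         if K[i][j] != 0}
# Every edge except X-Y survives, matching the right-hand graph.
```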
Regarding the left graph, it is unclear without having more context, but I think the idea is just to show what the conditional independence graph would look like if we didn't have zeros in those entries of the inverse covariance matrix.
In particular, using the above definition, we see that we can start with the complete graph on the nodes $X, Y, Z$, which is the left graph in that picture, and then derive the conditional independence graph from that first graph by removing all edges corresponding to conditionally independent random variables. The picture does compare the two graphs explicitly ("versus"), which to me suggests a comparison between the complete graph one might start with and the conditional independence graph one ends up with if/when they apply the definition of conditional independence graph as given above.
23,846 | Weighted generalized regression in BUGS, JAGS | It might be late... but,
Please note 2 things:
Adding data points is not advised as it would change the degrees of freedom. Mean estimates of fixed effects could be well estimated, but all inference should be avoided with such models. It is hard to "let the data speak" if you change the data.
Of course it only works with integer-valued weights (you cannot duplicate half a data point), which is not what is done in most weighted (lm) regressions. In general, a weighting is created based on local variability estimated from replicates (e.g. 1/s or 1/s^2 at a given 'x') or based on response height (e.g. 1/Y or 1/Y^2 at a given 'x').
In JAGS, BUGS, Stan, proc MCMC, or in Bayesian modelling in general, the likelihood is no different than in a frequentist lm or glm (or any model); it is just the same! Just create a new column "weight" for your response, and write the likelihood as
y[i] ~ dnorm(mu[i], tau / weight[i])
Or a weighted poisson:
y[i] ~ dpois(lambda[i] * weight[i])
This BUGS/JAGS code would simply do the trick. You will get everything correct. Don't forget to continue multiplying the posterior of tau by the weight, for instance when making predictions and confidence/prediction intervals.
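For the Poisson version, the weight acts like an exposure: under y[i] ~ dpois(lambda * weight[i]) with a common rate, the likelihood is maximised at total counts over total weight. A pure-Python sketch with made-up numbers (not from the answer):

```python
import math

# Hypothetical counts and weights (exposures).
y = [3, 5, 0, 7]
w = [1.0, 2.0, 0.5, 3.0]

def loglik(lam):
    """Poisson log-likelihood (dropping the y! constant) with mean lam * w[i]."""
    return sum(yi * math.log(lam * wi) - lam * wi for yi, wi in zip(y, w))

# Closed-form MLE: total counts over total weight.
lam_hat = sum(y) / sum(w)
```

The log-likelihood is concave in the rate, so `lam_hat` beats any nearby candidate value.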
23,847 | Weighted generalized regression in BUGS, JAGS | First, it's worth pointing out that glm does not perform Bayesian regression. The 'weights' parameter is basically a shorthand for "proportion of observations," which can be replaced with up-sampling your dataset appropriately. For example:
x=1:10
y=jitter(10*x)
w=sample(x,10)
augmented.x=NULL
augmented.y=NULL
for(i in 1:length(x)){
augmented.x=c(augmented.x, rep(x[i],w[i]))
augmented.y=c(augmented.y, rep(y[i],w[i]))
}
# These are both basically the same thing
m.1=lm(y~x, weights=w)
m.2=lm(augmented.y~augmented.x)
So to add weight to points in JAGS or BUGS, you can augment your dataset in a similar fashion as above.
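The same equivalence can be checked outside R. A pure-Python sketch (hypothetical numbers): the OLS slope on the up-sampled data equals the weighted-least-squares slope with integer frequency weights:

```python
x = [1, 2, 3, 4]
y = [2.1, 3.9, 6.2, 7.8]
w = [1, 3, 2, 4]  # integer frequency weights

def ols_slope(xs, ys):
    """Ordinary least-squares slope for a simple regression."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    return (sum((a - xb) * (b - yb) for a, b in zip(xs, ys))
            / sum((a - xb) ** 2 for a in xs))

def wls_slope(xs, ys, ws):
    """Weighted least-squares slope for a simple regression."""
    sw = sum(ws)
    xb = sum(wi * xi for wi, xi in zip(ws, xs)) / sw
    yb = sum(wi * yi for wi, yi in zip(ws, ys)) / sw
    return (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(ws, xs, ys))
            / sum(wi * (xi - xb) ** 2 for wi, xi in zip(ws, xs)))

# Up-sample: repeat each point w[i] times, as in the R loop above.
ax = [xi for xi, wi in zip(x, w) for _ in range(wi)]
ay = [yi for yi, wi in zip(y, w) for _ in range(wi)]
```

Both slopes agree up to floating-point rounding, since they involve exactly the same weighted sums.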
23,848 | Weighted generalized regression in BUGS, JAGS | Tried adding this as a comment above, but my rep is too low.
Should
y[i] ~ dnorm(mu[i], tau / weight[i])
not be
y[i] ~ dnorm(mu[i], tau * weight[i])
in JAGS? I'm running some tests comparing results from this method in JAGS to results from a weighted regression via lm(), and can only find agreement using the latter. Here's a simple example:
library(dplyr)  # needed for %>% and mutate()
aggregated <-
data.frame(x=1:5) %>%
mutate( y = round(2 * x + 2 + rnorm(length(x)) ),
freq = as.numeric(table(sample(1:5, 100,
replace=TRUE, prob=c(.3, .4, .5, .4, .3)))))
x <- aggregated$x
y <- aggregated$y
weight <- aggregated$freq
N <- length(y)
# via lm()
lm(y ~ x, data = aggregated, weight = freq)
and compare to
lin_wt_mod <- function() {
for (i in 1:N) {
y[i] ~ dnorm(mu[i], tau*weight[i])
mu[i] <- beta[1] + beta[2] * x[i]
}
for(j in 1:2){
beta[j] ~ dnorm(0,0.0001)
}
tau ~ dgamma(0.001, 0.001)
sigma <- 1/sqrt(tau)
}
dat <- list("N","x","y","weight")
params <- c("beta","tau","sigma")
library(R2jags)
fit_wt_lm1 <- jags.parallel(data = dat, parameters.to.save = params,
model.file = lin_wt_mod, n.iter = 3000, n.burnin = 1000)
fit_wt_lm1$BUGSoutput$summary
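One way to see why tau * weight[i] is the parameterization that matches lm(..., weights = freq): with precision tau * w_i, the derivative of the log-likelihood in the regression coefficients is proportional to the weighted normal equations (sum of w_i * r_i = 0 and sum of w_i * r_i * x_i = 0), which is exactly what weighted least squares solves; tau drops out of the argmax. A pure-Python sketch with made-up numbers:

```python
# Hypothetical data: the last point sits 1 above the line y = 2x + 1
# and carries a large weight.
x = [1, 2, 3, 4, 5]
y = [3.0, 5.0, 7.0, 9.0, 12.0]
w = [1.0, 1.0, 1.0, 1.0, 4.0]

def wls_fit(xs, ys, ws):
    """Closed-form weighted least squares for a simple regression."""
    sw = sum(ws)
    xb = sum(wi * xi for wi, xi in zip(ws, xs)) / sw
    yb = sum(wi * yi for wi, yi in zip(ws, ys)) / sw
    b = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(ws, xs, ys))
         / sum(wi * (xi - xb) ** 2 for wi, xi in zip(ws, xs)))
    return yb - b * xb, b

a, b = wls_fit(x, y, w)
r = [yi - a - b * xi for xi, yi in zip(x, y)]

# Score (gradient) of the dnorm(mu[i], tau * w[i]) log-likelihood in (a, b),
# up to the common factor tau: both components vanish at the weighted fit.
score_a = sum(wi * ri for wi, ri in zip(w, r))
score_b = sum(wi * ri * xi for wi, ri, xi in zip(w, r, x))

# Under tau / w[i] the analogous optimum is WLS with weights 1 / w[i],
# which gives a visibly different slope on these data.
a2, b2 = wls_fit(x, y, [1.0 / wi for wi in w])
```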
23,849 | Fisher for dummies? | An annotated Fisher would be an excellent resource!
I don't think that you will be able to understand Fisher without at the same time attempting to understand other major parts of the development of statistics and Fisher's interactions with the other important contributors. I found Statistics in Psychology: An Historical Perspective by Michael Cowles to be very helpful. (Don't let the psych bit of the title put you off: the book is quite general and seems to be a very even-handed account.)
On the topic of annotated Fisher, I quite recently annotated one of his paragraphs when I was asked to justify an assertion that Fisher proposed P-values to be indices of evidence against the null hypothesis. This is how I responded:
I have looked around a bit without finding an exact specification
because, as usual, Fisher's writing is awkward and requires some
effortful interpretation on the reader's part. He says on p. 46 of
Statistical Methods and Scientific Inference (I have the last
edition):
"Though recognizable as a psychological condition of reluctance, or
resistance to the acceptance of a proposition, the feeling induced by
a test of significance has an objective basis in that the probability
statement on which it is based is a fact communicable to, and
verifiable by, other rational minds. The level of significance in such
cases fulfils the conditions of a measure of the rational grounds for
the disbelief it engenders. It is more primitive, or elemental than,
and does not justify, any exact probability statement about the
proposition."
Here it is again, with my editorial and interpretive statements:
"Though recognizable as a psychological condition of reluctance, or
resistance to the acceptance of a proposition, the feeling induced by
a test of significance has an objective basis [Significance tests are
less well-defined mathematically than Neyman's hypothesis tests, but
they are nonetheless objective. Neyman may have criticised Fisher for
subjectivism bordering on Bayesianism (Gigerenzer et al. 1989, quoted
by Louca ISSN No 0874-4548), and Fisher wouldn't have liked it.] in
that the probability statement on which it is based [i.e. the
probability of having observed data as extreme or more extreme under
the null hypothesis] is a fact communicable to, and verifiable by,
other rational minds [i.e. to everyone except Neyman, whose
misunderstanding or misapplication of significance test principles is
criticised by Fisher in his preceding paragraph.]. The level of
significance in such cases [the P-value] fulfils the conditions of a
measure of the rational grounds for the disbelief it engenders [which
is to say, evidence]. It is more primitive, or elemental than, and
does not justify, any exact probability statement about the
proposition [and hence can be an index, but not a measure of
probability.]."
23,850 | Fisher for dummies? | Excellent web page at
http://www.economics.soton.ac.uk/staff/aldrich/fisherguide/rafreader.htm
with a very full bibliography.
23,851 | Mathematically modeling neural networks as graphical models | Another good introduction on the subject is the CSC321 course at the University of Toronto, and the neuralnets-2012-001 course on Coursera, both taught by Geoffrey Hinton.
From the video on Belief Nets:
Graphical models
Early graphical models used experts to define the graph structure and the conditional probabilities. The graphs were sparsely connected, and the focus was on performing correct inference, and not on learning (the knowledge came from the experts).
Neural networks
For neural nets, learning was central. Hard-wiring the knowledge was not cool (OK, maybe a little bit). Knowledge came from learning the training data, not from experts. Neural networks did not aim for interpretability or sparse connectivity to make inference easy. Nevertheless, there are neural network versions of belief nets.
My understanding is that belief nets are usually too densely connected, and their cliques are too large, to be interpretable. Belief nets use the sigmoid function to integrate inputs, while continuous graphical models typically use the Gaussian function. The sigmoid makes the network easier to train, but it is more difficult to interpret in terms of the probability. I believe both are in the exponential family.
I am far from an expert on this, but the lecture notes and videos are a great resource.
23,852 | Mathematically modeling neural networks as graphical models | Radford Neal has done a good bit of work in this area that might interest you, including some direct work in equating Bayesian graphical models with neural networks. (His dissertation was apparently on this specific topic.)
I'm not familiar enough with this work to provide an intelligent summary, but I wanted to give you the pointer in case you find it helpful.
23,853 | Mathematically modeling neural networks as graphical models | This may be an old thread, but still a relevant question.
The most prominent example of the connections between Neural Networks (NN) and Probabilistic Graphical Models (PGM) is the one between Boltzmann Machines (and their variations like the Restricted BM, Deep BM, etc.) and the undirected PGMs known as Markov Random Fields.
Similarly, Belief Networks (and their variations like Deep BNs, etc.) are a type of directed PGM, namely Bayesian networks.
For more, see:
Yann Lecun, "A tutorial on energy-based learning" (2006)
Yoshua Bengio, Ian Goodfellow and Aaron Courville, "Deep Learning", Ch 16 & 20 (book in preparation, at the time of writing this)
23,854 | Log-Cauchy Random Number Generation | A variable $X$ has a log-Cauchy distribution if $\log(X)$ has a Cauchy distribution. So, we just need to generate Cauchy random variables and exponentiate them to get something that is log-Cauchy distributed.
We can generate from the Cauchy distribution using inverse transform sampling, which says that if you plug random uniforms into the inverse CDF of a distribution, then what you get out has that distribution. The Cauchy distribution with location $\mu$ and scale $\sigma$ has CDF:
$$
F(x) = \frac{1}{\pi} \arctan\left(\frac{x-\mu}{\sigma}\right)+\frac{1}{2} $$
it is straightforward to invert this function to find that
$$
F^{-1}(y) =\mu + \sigma\,\tan\left[\pi\left(y-\tfrac{1}{2}\right)\right] $$
Therefore if $U \sim {\rm Uniform}(0,1)$ then $Y = \mu + \sigma\,\tan\left[\pi\left(U-\tfrac{1}{2}\right)\right]$ has a Cauchy distribution with location $\mu$ and scale $\sigma$, and $\exp(Y)$ has a log-Cauchy distribution. Some R code to generate from this distribution (without using rcauchy :))
rlogcauchy <- function(n, mu, sigma)
{
u = runif(n)
x = mu + sigma*tan(pi*(u-.5))
return( exp(x) )
}
Note: since the Cauchy distribution is very long-tailed, when you exponentiate draws on a computer you may get values that are numerically "infinite". I'm not sure there's anything to be done about that.
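For what it's worth, here is a Python translation of the same sampler (a sketch, not part of the original answer). Unlike R, math.exp raises OverflowError rather than returning Inf, so extreme upper-tail draws are mapped to math.inf explicitly, mirroring the "numerically infinite" caveat above:

```python
import math
import random

def rlogcauchy(n, mu, sigma, rng=random):
    """Draw n log-Cauchy(mu, sigma) variates by inverse transform sampling."""
    draws = []
    for _ in range(n):
        u = rng.random()
        z = mu + sigma * math.tan(math.pi * (u - 0.5))  # Cauchy(mu, sigma)
        try:
            draws.append(math.exp(z))
        except OverflowError:
            draws.append(math.inf)  # extreme upper-tail draw
    return draws
```

A cheap sanity check: the median of a log-Cauchy($\mu$, $\sigma$) variable is $\exp(\mu)$, so for $\mu = 0$ the sample median should sit near 1.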
Also note that if you were to do inverse transform sampling using the log-Cauchy quantile function directly, you'd have the same problem, since, after doing the calculation, you actually end up with the exact same thing - $\exp \left( \mu + \sigma\,\tan\left[\pi\left(U-\tfrac{1}{2}\right)\right] \right)$.
23,855 | The relationship between the number of support vectors and the number of features | If you look at the optimization problem that SVM solves:
$\min_{\mathbf{w},\mathbf{\xi}, b } \left\{\frac{1}{2} \|\mathbf{w}\|^2 + C \sum_{i=1}^n \xi_i \right\}$
s.t.
$y_i(\mathbf{w}\cdot\mathbf{x_i} - b) \ge 1 - \xi_i, ~~~~\xi_i \ge 0,$
for all $ i=1,\dots n$
the support vectors are those $x_i$ for which the margin constraint is active: points on the margin (with $\xi_i = 0$) or violating it (with $\xi_i \gt 0$). In other words, they are the data points that are either misclassified, or on or close to the boundary.
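As a tiny numeric illustration (entirely made-up numbers, with the hyperplane $(\mathbf{w}, b)$ held fixed rather than optimized): the slack $\xi_i = \max(0, 1 - y_i(\mathbf{w}\cdot\mathbf{x}_i - b))$ is zero for points at or beyond the margin and positive for margin violators:

```python
def slack(x, y, w, b):
    """Hinge slack max(0, 1 - y * (w . x - b)) for one point."""
    f = sum(wj * xj for wj, xj in zip(w, x)) - b
    return max(0.0, 1.0 - y * f)

w, b = [1.0, 1.0], 0.0  # a fixed, hypothetical separating direction

points = [
    ([1.5, 1.5], +1),    # well beyond the margin: slack 0
    ([0.3, 0.3], +1),    # inside the margin:      slack 0.4
    ([-0.5, -0.5], -1),  # exactly on the margin:  slack 0
    ([0.5, 0.5], -1),    # misclassified:          slack 2
]
slacks = [slack(x, y, w, b) for x, y in points]
```

The second and fourth points would contribute to the penalty term $C \sum_i \xi_i$; the third has an active constraint despite zero slack.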
Now let's compare the solution to this problem when you have a full set of features, to the case where you throw some features away. Throwing a feature away is functionally equivalent to keeping the feature, but adding a constraint $w_j=0$ for the feature $j$ that we want to discard.
When you compare these two optimization problems, and work through the math, it turns out there is no hard relationship between the number of features and the number of support vectors. It could go either way.
It's useful to think about a simple case. Imagine a 2-dim case where your negative and positive features are clustered around (-1,-1) and (1,1), respectively, and are separable with a diagonal separating hyperplane with 3 support vectors. Now imagine dropping the y-axis feature, so your data is now projected on the x-axis. If the data are still separable, say at x=0, you'd probably be left with only 2 support vectors, one on each side, so adding the y-feature would increase the number of support vectors. However, if the data are no longer separable, you'd get at least one support vector for each point that's on the wrong side of x=0, in which case adding the y-feature would reduce the number of support vectors.
So, if this intuition is correct, if you're working in very high-dimensional feature spaces, or using a kernel that maps to a high dimensional feature space, then your data is more likely to be separable, so adding a feature will tend to just add another support vector. Whereas if your data is not currently separable, and you add a feature that significantly improves separability, then you're more likely to see a decrease in the number of support vectors. | The relationship between the number of support vectors and the number of features | If you look at the optimization problem that SVM solves:
23,856 | Ridge and LASSO given a covariance structure? | If we know the Cholesky decomposition $V^{-1} = L^TL$, say, then
$$(y - X\beta)^T V^{-1} (y - X\beta) = (Ly - LX\beta)^T (Ly - LX\beta)$$
and we can use standard algorithms (with whatever penalization function one prefers) by replacing the response with the vector $Ly$ and the predictors with the matrix $LX$.
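A small numerical sketch of this trick in R. Note that chol() returns the upper-triangular factor, so chol(solve(V)) plays the role of $L$ (satisfying $V^{-1} = L^T L$); the example checks that ordinary least squares on the transformed data reproduces the GLS estimator, and a penalized fit would simply use the same $Ly$ and $LX$:

```r
set.seed(1)
n <- 50; p <- 3
X <- matrix(rnorm(n * p), n, p)
beta <- c(1, -2, 0.5)

V <- 0.5^abs(outer(1:n, 1:n, "-"))   # an illustrative AR(1)-type covariance
y <- X %*% beta + t(chol(V)) %*% rnorm(n)

W <- chol(solve(V))                  # V^{-1} = t(W) %*% W, i.e. the L of the text
coef(lm(W %*% y ~ W %*% X - 1))      # OLS on the whitened data ...
solve(t(X) %*% solve(V) %*% X,
      t(X) %*% solve(V) %*% y)       # ... matches the GLS estimator
```

For ridge or lasso one would pass W %*% y and W %*% X to, e.g., glmnet in place of the raw response and predictors.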
23,857 | Can I hypothesis test for skew normal data? | Regarding how to fit data to a skew-normal distribution: You could calculate the maximum likelihood estimator from first principles. First note that the probability density function for the skew-normal distribution with location parameter $\xi$, scale parameter $\omega$, and shape parameter $\alpha$ is
$$ \frac{2}{\omega} \phi\left(\frac{x-\xi}{\omega}\right) \Phi\left(\alpha \left(\frac{x-\xi}{\omega}\right)\right) $$
where $\phi(\cdot)$ is the standard normal density function and $\Phi(\cdot)$ is the standard normal CDF. Note that this density is a member of the class described in my answer to this question.
The log-likelihood based on a sample of $n$ independent observations $x_1, \dots, x_n$ from this distribution is (up to the additive constant $n \log 2$):
$$ -n\log(\omega) + \sum_{i=1}^{n} \left[ \log \phi\left(\frac{x_i-\xi}{\omega}\right) + \log \Phi\left(\alpha \left(\frac{x_i-\xi}{\omega}\right)\right) \right]$$
It's a fact that there is no closed form solution for this MLE. But, it can be solved numerically. For example, in R, you could code up the likelihood function as (note, I've made it less compact/efficient than possible to make it completely transparent how this calculates the likelihood function above):
set.seed(2345)
# generate standard normal data, which is a special case
n = 100
X = rnorm(n)
# Calculate (negative) log likelihood for minimization
# P[1] is omega, P[2] is xi and P[3] is alpha
L = function(P)
{
# positivity constraint on omega
if( P[1] <= 0 ) return(Inf)
S = 0
for(i in 1:n)
{
S = S - log( dnorm( (X[i] - P[2])/P[1] ) )
S = S - log( pnorm( P[3]*(X[i] - P[2])/P[1] ) )
}
return(S + n*log(P[1]))
}
Now we just numerically minimize this function (i.e. maximize the likelihood). You can do this without having to calculate derivatives by using the Simplex Algorithm, which is the default method of the optim() function in R's stats package.
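Continuing the session, the unconstrained fit itself is then just (the starting values are arbitrary):

```r
fit <- optim(c(1, 1, 1), L)   # Nelder-Mead simplex by default
fit$par                       # MLEs of omega, xi and alpha
-fit$value                    # maximized log likelihood
```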
Regarding how to test for skewness: We can explicitly test for skew-normal vs. normal (since normal is a submodel) by constraining $\alpha = 0$ and doing a likelihood ratio test.
# log likelihood constraining alpha=0.
L2 = function(Q) L(c(Q[1],Q[2],0))
# log likelihood from the constrained model
-optim(c(1,1),L2)$value
[1] -202.8816
# log likelihood from the full model
-optim(c(1,1,1),L)$value
[1] -202.0064
# likelihood ratio test statistic
LRT = 2*(202.8816-202.0064)
# p-value under the null distribution (chi square 1)
1-pchisq(LRT,1)
[1] 0.1858265
So we do not reject the null hypothesis that $\alpha=0$ (i.e. no skew).
Here the comparison was simple, since the normal distribution was a submodel. In other, more general cases, you could compare the skew-normal to other reference distributions by comparing, for example, AICs (as done here) if you're using maximum likelihood estimators in all competing fits. For example, you could fit the data by maximum likelihood under a gamma distribution and under the skew normal and see if the added likelihood justifies the added complexity of the skew-normal (3 parameters instead of 2). You could also consider using the one sample Kolmogorov Smirnov test to compare your data with the best fitting estimate from the skew-normal family.
23,858 | Can I hypothesis test for skew normal data? | I am a statistician who has been working in this profession for over 30 years, and before reading this post I had never heard of the skew normal distribution. If you have highly skewed data, why do you specifically want to look at the skew normal as opposed to the lognormal or gamma? Anytime you have a parametric family of distributions such as the gamma, lognormal or skew normal, you can apply a goodness-of-fit test such as chi-square or Kolmogorov-Smirnov.
23,859 | Can I hypothesis test for skew normal data? | So my solution in the end was to download the fGarch package, and use snormFit provided by fGarch to get MLEs for the parameters of a skew-normal.
Then I plugged those parameters, with the dsnorm function provided by fGarch, into a Kolmogorov-Smirnov test.
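A sketch of that workflow, assuming fGarch's API (snormFit() returning the MLEs in $par, and the CDF psnorm() — it is the CDF, not the density dsnorm(), that ks.test() actually needs):

```r
library(fGarch)  # assumed available

set.seed(1)
x <- rsnorm(200, mean = 0, sd = 1, xi = 1.5)   # simulated skewed data
est <- snormFit(x)$par                         # named MLEs: mean, sd, xi

ks.test(x, "psnorm",
        mean = est["mean"], sd = est["sd"], xi = est["xi"])
```

One caveat: because the parameters were estimated from the same data, the resulting p-value is anti-conservative (the same issue the Lilliefors correction addresses for the normal case).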
23,860 | Can I hypothesis test for skew normal data? | Check out http://www.egyankosh.ac.in/bitstream/123456789/25807/1/Unit6.pdf and http://en.wikipedia.org/wiki/Skewness
You could use the Karl Pearson test for skewness. The ratio of the third central moment to the cube of the standard deviation is called the moment coefficient of skewness. Symmetrical distributions would have skewness = 0.
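That coefficient is easy to compute directly; a base-R sketch (using the population-style standard deviation in the denominator):

```r
moment_skewness <- function(x) {
  m2 <- mean((x - mean(x))^2)   # second central moment
  m3 <- mean((x - mean(x))^3)   # third central moment
  m3 / m2^(3/2)
}
moment_skewness(rexp(1e5))      # right-skewed: close to 2 for the exponential
```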
23,861 | Can I hypothesis test for skew normal data? | in SPSS you can get a estimate of the skewness (by going to analyze and then descriptives and then mark skewness) then you get a score of skewness and S.E (standard error) of skewness. Divide the skewness by its S.E and if your score is between +-1.96 its normally skewd.
If it is not (i.e. the data are significantly skewed), there are many non-parametric tests out there!
Good luck and all the best!
23,862 | Doubling the tails in two-sample permutation test | The permutation distribution of your test statistic is not guaranteed to be symmetric, so you can't do it that way. Instead, you add both tails. In your case of two independent samples, the null hypothesis is that the two location parameters are equal. Assuming continuous distributions and equal spread in both groups, we have exchangeability under the null hypothesis. Test statistic $T$ is the difference in means, with $E(T) = 0$ under the null.
The value for $T$ in the original sample is $T_{\text{emp}}$, and its values for the permutations $T^{\star}$. $\sharp(\cdot)$ is short for "number of" something, e.g., $\sharp(T^{\star})$ is the number of permutation test statistics. Then the $p$-value for the two-sided hypothesis is $p_{\text{ts}} = p_{\text{left}} + p_{\text{right}}$, where
$p_{\text{left}} = \frac{\sharp\left(T^{\star} \le \min(T_{\text{emp}}, -T_{\text{emp}})\right)}{\sharp(T^{\star})}$
$p_{\text{right}} = \frac{\sharp\left(T^{\star} \ge \max(T_{\text{emp}}, -T_{\text{emp}})\right)}{\sharp(T^{\star})}$
(assuming we have the complete permutation distribution). Let's compare both approaches for the case of two independent samples when we can calculate the exact (complete) permutation distribution.
set.seed(1234)
Nj <- c(9, 8) # group sizes
DVa <- rnorm(Nj[1], 5, 20)^2 # data group 1
DVb <- rnorm(Nj[2], 10, 20)^2 # data group 2
DVab <- c(DVa, DVb) # data from both groups
IV <- factor(rep(c("A", "B"), Nj)) # grouping factor
idx <- seq(along=DVab) # all indices
idxA <- combn(idx, Nj[1]) # all possible first groups
# function to calculate test statistic for a given permutation x
getDM <- function(x) { mean(DVab[x]) - mean(DVab[!(idx %in% x)]) }
resDM <- apply(idxA, 2, getDM) # test statistic for all permutations
diffM <- mean(DVa) - mean(DVb) # empirical test statistic
Now calculate the $p$-values and validate the proposed solution with the implementation in R's coin package. Observe that $p_{\text{left}} \neq p_{\text{right}}$, so it matters which way you calculate $p_{\text{ts}}$.
> (pL <- sum(resDM <= min(diffM, -diffM)) / length(resDM)) # left p-value
[1] 0.1755245
> (pR <- sum(resDM >= max(diffM, -diffM)) / length(resDM)) # right p-value
[1] 0.1585356
> 2*pL # doubling left p-value
[1] 0.351049
> 2*pR # doubling right p-value
[1] 0.3170712
> pL+pR # two-sided p-value
[1] 0.3340601
> sum(abs(resDM) >= abs(diffM)) / length(resDM) # two-sided p-value (more concise)
[1] 0.3340601
# validate with coin implementation
> library(coin) # for oneway_test()
> oneway_test(DVab ~ IV, alternative="two.sided", distribution="exact")
Exact 2-Sample Permutation Test
data: DVab by IV (A, B)
Z = 1.0551, p-value = 0.3341
alternative hypothesis: true mu is not equal to 0
P.S. For the Monte-Carlo case where we only sample from the permutation distribution, the $p$-values would be defined like this:
$p_{\text{left}} = \frac{\sharp\left(T^{\star} \le \min(T_{\text{emp}}, -T_{\text{emp}})\right) + 1}{\sharp(T^{\star}) + 1}$
$p_{\text{right}} = \frac{\sharp\left(T^{\star} \ge \max(T_{\text{emp}}, -T_{\text{emp}})\right) + 1}{\sharp(T^{\star}) + 1}$
$p_{\text{ts}} = \frac{\sharp\left(|T^{\star}| \ge |T_{\text{emp}}|\right) + 1}{\sharp(T^{\star}) + 1}$
Intuitively, the reason for adding one more extreme permutation case to numerator and denominator is that we need to count the empirical sample as well. Otherwise, the permutation $p$-value could be 0, which cannot happen in the continuous case (see here; note: some texts recommend this correction, some don't).
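Continuing the R session above (reusing DVab, idx, Nj and diffM), the Monte-Carlo version with the +1 correction can be sketched as:

```r
set.seed(5678)
B <- 9999                                      # number of random permutations
resMC <- replicate(B, {
    permA <- sample(idx, Nj[1])                # random assignment to group A
    mean(DVab[permA]) - mean(DVab[-permA])
})
(sum(abs(resMC) >= abs(diffM)) + 1) / (B + 1)  # two-sided Monte-Carlo p-value
```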
23,863 | What is the difference between GLM and GEE? | There may be a better and more detailed answer out there, but I can give you some simple, quick thoughts. It appears that you are talking about using a Generalized Linear Model (e.g., a typical logistic regression) to fit data gathered from some subjects at multiple time points. At first blush, I see two glaring problems with this approach.
First, this model assumes that your data are independent given the covariates (that is, after having accounted for a dummy code for each subject, akin to an individual intercept term, and a linear time trend that is equal for everybody). This is wildly unlikely to be true. Instead, there will almost certainly be autocorrelations, for example, two observations of the same individual closer in time will be more similar than two observations further apart in time, even after having accounted for time. (Although they may well be independent if you also included a subject ID x time interaction--i.e., a unique time trend for everybody--but this would exacerbate the next problem.)
Second, you are going to burn up an enormous number of degrees of freedom estimating a parameter for each participant. You are likely to have relatively few degrees of freedom left with which to try to accurately estimate your parameters of interest (of course, this depends on how many measurements you have per person).
Ironically, the first problem means that your confidence intervals are too narrow, whereas the second means your CIs will be much wider than they would have been if you hadn't wasted most of your degrees of freedom. However, I wouldn't count on these two balancing each other out. For what it's worth, I believe that your parameter estimates would be unbiased (although I may be wrong here).
Using the Generalized Estimating Equations is appropriate in this case. When you fit a model using GEE, you specify a correlational structure (such as AR(1)), and it can be quite reasonable that your data are independent conditional on both your covariates and the correlation matrix you specified. In addition, GEE estimates the population-average association, so you needn't burn a degree of freedom for each participant--in essence you are averaging over them.
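A minimal sketch of such a fit in R, assuming the geepack package and a long-format data frame d with illustrative columns y, x3, time and id (observations ordered by time within each id):

```r
library(geepack)  # assumed available

# population-averaged logistic model with an AR(1) working correlation
fit <- geeglm(y ~ x3 + time, family = binomial, data = d,
              id = id, corstr = "ar1")
summary(fit)      # coefficients with robust (sandwich) standard errors
```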
As for the interpretation, as far as I am aware, it would be the same in both cases: given that the other factors remain constant, a one-unit change in X3 is associated with a B3 change in the log odds of 'success'.
23,864 | Can I use PCA to do variable selection for cluster analysis? | I will, as is my custom, take a step back and ask what it is you are trying to do, exactly. Factor analysis is designed to find latent variables. If you want to find latent variables and cluster them, then what you are doing is correct. But you say you simply want to reduce the number of variables - that suggests principal component analysis, instead.
However, with either of those, you have to interpret cluster analysis on new variables, and those new variables are simply weighted sums of the old ones.
How many variables have you got? How correlated are they? If there are far too many, and they are very strongly correlated, then you could look for all correlations over some very high number, and randomly delete one variable from each pair. This reduces the number of variables and leaves the variables as they are.
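That pruning idea can be sketched in base R (the 0.9 cutoff is an arbitrary illustration; dat is assumed to be a numeric data frame or matrix):

```r
drop_correlated <- function(dat, cutoff = 0.9) {
  repeat {
    cm <- abs(cor(dat))
    diag(cm) <- 0
    if (max(cm) < cutoff) return(dat)
    pair <- which(cm == max(cm), arr.ind = TRUE)[1, ]  # most correlated pair
    dat  <- dat[, -sample(pair, 1), drop = FALSE]      # randomly drop one of the two
  }
}
```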
Let me also echo @StasK about the need to do this at all, and @rolando2 about the usefulness of finding something different from what has been found before. As my favorite professor in grad school used to say, "If you're not surprised, you haven't learned anything".
23,865 | Can I use PCA to do variable selection for cluster analysis? | A way to perform factor analysis and cluster analysis at the same time is through structural equation mixture models. In these models, you postulate that there are separate models (in this case, factor models) for each cluster. You would need to have the mean analysis along with the covariance analysis, and be concerned with identification to a greater extent than in plain vanilla factor analysis. The idea approached from the SEM side appears in Jedidi et al. (1997), and from the clustering side, in model-based clustering by Adrian Raftery. This type of analysis is, apparently, available in Mplus.
A way to perform factor analysis and cluster analysis at the same time is through structural equation mixture models. In these models, you postulate that there are separate models (in this case, factor models) for each cluster. You would need to have the mean analysis along with the covariance analysis, and be concerned with identification to a greater extent that in plain vanilla factor analysis. The idea approached from SEM side appears in Jedidi et. al. (1997), and from clustering side, in model-based clustering by Adrian Raftery. This type of analysis is, apparently, available in Mplus. | Can I use PCA to do variable selection for cluster analysis?
A way to perform factor analysis and cluster analysis at the same time is through structural equation mixture models. In these models, you postulate that there are separate models (in this case, facto |
23,866 | Can I use PCA to do variable selection for cluster analysis? | I don't think it's a matter of "correctness" pure and simple, but rather whether it will accomplish what you are looking to do. The approach you describe will end up clustering according to certain factors, in a watered-down way, since you will be using only one indicator to represent each factor. Each such indicator figures to be an imperfect stand-in for the underlying, latent factor. That's one issue.
Another issue is that factor analysis itself, as I (and many other people) have recounted, is full of subjective decisions involving how to deal with missing data, number of factors to extract, how to extract, whether and how to rotate, and so on. So it may be far from clear that the factors you may have extracted in a quick, software-default manner (as I think you have implied) are the "best" in any sense.
Altogether, then, you may have used watered-down versions of factors that are themselves debatable as being the best ways to characterize the themes underlying your data. I wouldn't expect that the clusters resulting from such input variables would be the most informative or the most distinct.
On another note, it seems interesting that you consider it a problem to have cluster memberships/profiles that don't line up with what other researchers have found. Sometimes disconfirming findings can be very healthy!
23,867 | Can I use PCA to do variable selection for cluster analysis? | What could be happening in your case is that the factors extracted in Factor Analysis have compensating positive and negative loadings from the original variables. This would diminish the differentiability that is the purpose of clustering.
Can you break up each extracted factor into 2 - one having just the positive loadings, the other just the negative loadings?
Replace the factor scores for each case for each factor by positive scores and negative scores and try clustering on this new set of scores.
Please drop in a line if this works for you.
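For concreteness, here is a small illustrative Python sketch of this splitting idea (the loadings and the case values are invented for the example; they are not from the question):

```python
# Illustrative only: invented loadings for one factor and one (standardised)
# case; not data from the question.
def split_loadings(loadings):
    """Split a factor's loading vector into positive and negative parts."""
    pos = [max(l, 0.0) for l in loadings]
    neg = [min(l, 0.0) for l in loadings]
    return pos, neg

def score(case, loadings):
    """Factor score as a weighted sum of the case's values."""
    return sum(x * l for x, l in zip(case, loadings))

factor1 = [0.8, -0.6, 0.5, -0.4]   # compensating positive and negative loadings
case = [1.2, 0.9, -0.3, 0.7]       # one observation on the four variables

pos, neg = split_loadings(factor1)
pos_score = score(case, pos)       # clustering input column 1
neg_score = score(case, neg)       # clustering input column 2

# The two split scores always add back up to the original factor score.
assert abs((pos_score + neg_score) - score(case, factor1)) < 1e-12
```

Feeding the positive and negative score columns to the clustering algorithm separately prevents large positive and large negative contributions from cancelling into a nondescript factor score.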
23,868 | Can I use PCA to do variable selection for cluster analysis? | You could scan both for high values and also for low values and leave all variables in the factors. This way, there is no need to cut up the factors. If you split Factor 1 (say) a certain way based on the signs of the loadings, in Factor 2, the signs may be quite different. Would you then cut up Factor 2 differently from Factor 1? This seems to be confusing.
23,869 | Order statistics (e.g., minimum) of infinite collection of chi-square variates? | The zeros of the infinite product will be the union of the zeros of the terms. Computing out to the 20th term shows the general pattern:
This plot of the zeros in the complex plane distinguishes the contributions of the individual terms in the product by means of different symbols: at each step, the apparent curves are extended further and a new curve is started even further left.
The complexity of this picture demonstrates that there exists no closed-form solution in terms of well-known functions of higher analysis (such as gammas, thetas, hypergeometric functions, etc., as well as the elementary functions, as surveyed in a classic text like Whittaker & Watson).
Thus, the problem might be more fruitfully posed a little differently: what do you need to know about the distributions of the order statistics? Estimates of their characteristic functions? Low order moments? Approximations to quantiles? Something else?
23,870 | Order statistics (e.g., minimum) of infinite collection of chi-square variates? | what is the distribution of the minimum of (independent) $\chi^2_2,\chi^2_4,\chi^2_6,\ldots$?
Apologies for arriving some 6 years late. Even though the OP has likely moved on to other problems, the question remains fresh, and I thought I might suggest a different approach.
We are given $(X_1, X_2, X_3, \dots)$, where $X_i \sim \text{Chisquared}(v_i)$ with $v_i = 2i$ and pdf's $f_i(x_i)$:
Here is a plot of the corresponding pdf's $f_i(x_i)$, as the sample size increases, for $i = 1 \text{ to } 8$:
We are interested in the distribution of $\text{min}(X_1, X_2, X_3, \dots)$.
Each time we add an extra term, the pdf of the marginal last term added shifts further and further to the right, so that the effect of adding more and more terms on the sample minimum becomes not only less and less relevant but, after just a few terms, almost negligible. This means, in effect, that only a very small number of terms are likely to actually matter ... and adding additional terms (or the presence of an infinite number of terms) is largely irrelevant for the sample minimum problem.
Test
To test this, I have calculated the pdf of $\text{min}(X_1, X_2, X_3, \dots)$ to 1 term, 2 terms, 3 terms, 4 terms, 5 terms, 6 terms, 7 terms, 8 terms, to 9 terms, and to 10 terms. To do this, I have used the OrderStatNonIdentical function from mathStatica, instructing it here to calculate the pdf of the sample minimum (the $1^{\text{st}}$ order statistic) in a sample of size $j$, and where parameter $i$ (instead of being fixed) is $v_i$:
It gets a bit complicated as the number of terms increase ... but I have shown the output for 1 term (1st row), 2 terms (second row), 3 terms (3rd row) and 4 terms above.
The following diagram compares the pdf of the sample minimum with 1 term (blue), 2 terms (orange), 3 terms, and 10 terms (red). Note how similar the results are with just 3 terms vs 10 terms:
The following diagram compares 5 terms (blue) and 10 terms (orange) -- the plots are so similar, they obliterate each other, and one cannot even see the difference:
In other words, increasing the number of terms from 5 to 10 has almost no discernible visual impact on the distribution of the sample minimum.
Half-Logistic Approximation
Finally, an excellent simple approximation of the pdf of the sample min is the half-Logistic distribution with pdf:
$$g(x) = \frac{2 e^{-x}}{\left(e^{-x}+1\right)^2} \quad \text{ for } x>0$$
The following diagram compares the exact solution with 10 terms (which is indistinguishable from 5 terms or 20 terms) and the half-Logistic approximation (dashed):
Increasing to 20 terms makes no discernible difference.
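The half-Logistic approximation is easy to check numerically, because a chi-square variate with even degrees of freedom $2k$ has the closed-form survival function $e^{-x/2}\sum_{j<k}(x/2)^j/j!$, and the survival function of the minimum of independent variates is the product of the individual survival functions. The following pure-Python sketch (an independent check, not the mathStatica computation used above) truncates the product at 12 terms:

```python
import math

def chisq_surv(x, df):
    """Survival function of a chi-square variate with even df = 2k:
    exp(-x/2) * sum_{j<k} (x/2)^j / j!  (the Poisson tail identity)."""
    k = df // 2
    return math.exp(-x / 2) * sum((x / 2) ** j / math.factorial(j) for j in range(k))

def min_cdf(x, n_terms=12):
    """CDF of min(X_1, X_2, ...), X_i ~ chi-square(2i), truncating at n_terms."""
    surv = 1.0
    for i in range(1, n_terms + 1):
        surv *= chisq_surv(x, 2 * i)
    return 1.0 - surv

def half_logistic_cdf(x):
    """CDF of the half-Logistic pdf 2e^{-x}/(1+e^{-x})^2, i.e. tanh(x/2)."""
    return math.tanh(x / 2)

for x in (0.5, 1.0, 2.0, 4.0):
    # Extra terms beyond ~12 change essentially nothing ...
    assert abs(min_cdf(x, 12) - min_cdf(x, 20)) < 1e-6
    # ... and the half-Logistic tracks the exact CDF closely.
    assert abs(min_cdf(x) - half_logistic_cdf(x)) < 0.02
```

This reproduces both claims above: the truncated product stabilises after a handful of terms, and the half-Logistic CDF stays within about 0.01 of it across the range.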
23,871 | How to test whether a regression coefficient is moderated by a grouping variable? | Your method does not appear to address the question, assuming that a "moderating effect" is a change in one or more regression coefficients between the two groups. Significance tests in regression assess whether the coefficients are nonzero. Comparing p-values in two regressions tells you little (if anything) about differences in those coefficients between the two samples.
Instead, introduce gender as a dummy variable and interact it with all the coefficients of interest. Then test for significance of the associated coefficients.
For example, in the simplest case (of one independent variable) your data can be expressed as a list of $(x_i, y_i, g_i)$ tuples where $g_i$ are the genders, coded as $0$ and $1$. The model for gender $0$ is
$$y_i = \alpha_0 + \beta_0 x_i + \varepsilon_i$$
(where $i$ indexes the data for which $g_i = 0$) and the model for gender $1$ is
$$y_i = \alpha_1 + \beta_1 x_i + \varepsilon_i$$
(where $i$ indexes the data for which $g_i = 1$). The parameters are $\alpha_0$, $\alpha_1$, $\beta_0$, and $\beta_1$. The errors are the $\varepsilon_i$. Let's assume they are independent and identically distributed with zero means. A combined model to test for a difference in slopes (the $\beta$'s) can be written as
$$y_i = \alpha + \beta_0 x_i + (\beta_1 - \beta_0) (x_i g_i) + \varepsilon_i$$
(where $i$ ranges over all the data) because when you set $g_i=0$ the last term drops out, giving the first model with $\alpha = \alpha_0$, and when you set $g_i=1$ the two multiples of $x_i$ combine to give $\beta_1$, yielding the second model with $\alpha = \alpha_1$. Therefore, you can test whether the slopes are the same (the "moderating effect") by fitting the model
$$y_i = \alpha + \beta x_i + \gamma (x_i g_i) + \varepsilon_i$$
and testing whether the estimated moderating effect size, $\hat{\gamma}$, is zero. If you're not sure the intercepts will be the same, include a fourth term:
$$y_i = \alpha + \delta g_i + \beta x_i + \gamma (x_i g_i) + \varepsilon_i.$$
You don't necessarily have to test whether $\hat{\delta}$ is zero, if that is not of any interest: it's included to allow separate linear fits to the two genders without forcing them to have the same intercept.
The main limitation of this approach is the assumption that the variances of the errors $\varepsilon_i$ are the same for both genders. If not, you need to incorporate that possibility and that requires a little more work with the software to fit the model and deeper thought about how to test the significance of the coefficients.
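The combined model is straightforward to fit by ordinary least squares. The following self-contained Python sketch (simulated, noise-free data, so the coefficients are recovered exactly; with real data you would compare $\hat\gamma$ to its standard error) illustrates how the interaction column $x_i g_i$ picks up the slope difference:

```python
# Simulated, noise-free data: group g=0 has y = 1 + 2x, group g=1 has
# y = 1.5 + 3x.  Fitting y ~ 1 + g + x + x*g by least squares (plain
# normal equations; no libraries) recovers gamma = slope difference.
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Least-squares coefficients via the normal equations X'X b = X'y."""
    p = len(X[0])
    XtX = [[sum(row[a] * row[b] for row in X) for b in range(p)] for a in range(p)]
    Xty = [sum(row[a] * yi for row, yi in zip(X, y)) for a in range(p)]
    return solve(XtX, Xty)

data = [(k / 4, g) for k in range(8) for g in (0, 1)]
y = [1 + 2 * x if g == 0 else 1.5 + 3 * x for x, g in data]
X = [[1.0, g, x, x * g] for x, g in data]   # columns: alpha, delta, beta, gamma

alpha, delta, beta, gamma = ols(X, y)
assert abs(gamma - 1.0) < 1e-8   # moderating effect: slope difference 3 - 2
assert abs(delta - 0.5) < 1e-8   # intercept difference 1.5 - 1
```

In practice one would use a regression package that also reports standard errors, but the design-matrix construction — one intercept, one gender dummy, one slope, one interaction column — is exactly the one described above.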
23,872 | How to test whether a regression coefficient is moderated by a grouping variable? | I guess moderating a grouping variable would work equally well when comparing regression coefficients across independent waves of cross-sectional data (e.g., year1, year2 and year3 as group1, group2 and group3)?
23,873 | What's a component in gaussian mixture model? | A mixture of Gaussians is defined as a weighted sum (a convex combination) of multiple Gaussian distributions, so it can have multiple modes. The dimension refers to the data (e.g. the color, length, width, height and material of a shoe) while the number of components refers to the model. Each Gaussian in your mixture is one component. Thus each component will correspond to one mode, in most cases.
I suggest you read up on mixture models on wikipedia.
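As a concrete illustration (stdlib Python, invented parameters): a two-component mixture density in one dimension is a weighted sum of two normal pdfs, and with well-separated means the density has one mode per component:

```python
import statistics

# Two components (K = 2) in one dimension, with invented parameters.
comp1 = statistics.NormalDist(mu=0.0, sigma=1.0)
comp2 = statistics.NormalDist(mu=6.0, sigma=1.0)
w1, w2 = 0.5, 0.5                      # mixture weights, summing to one

def mixture_pdf(x):
    return w1 * comp1.pdf(x) + w2 * comp2.pdf(x)

# Scan a grid and collect strict local maxima: one mode per component here.
grid = [i / 10 for i in range(-40, 101)]
vals = [mixture_pdf(x) for x in grid]
modes = [grid[i] for i in range(1, len(vals) - 1)
         if vals[i - 1] < vals[i] > vals[i + 1]]

assert len(modes) == 2                           # two components, two modes
assert abs(modes[0] - 0.0) < 0.2 and abs(modes[1] - 6.0) < 0.2
```

If the component means were moved close together, the two bumps would merge into a single mode — which is why a component corresponds to a mode only "in most cases".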
23,874 | What's a component in gaussian mixture model? | A mixture of Gaussians algorithm is a probabilistic generalization of the $k$-means algorithm. Each mean vector in $k$-means is a component. The number of elements in each of the $k$ vectors is the dimension of the model. Thus, if you have $n$ dimensions, you have a $k\times n$ matrix of mean vectors.
It is no different in a mixture of Gaussians except that now you have to deal with covariance matrices in your model.
23,875 | How to handle non existent (not missing) data? | Instead of assigning a special value to the non-existent previous lap time of first-time runners, simply use an interaction term for previous lap time with the inverse of the first-time-runner dummy:
$$Y_i=\beta_0+\beta_1 FTR_i+\beta_2 (NFTR_i)\times PLT_i+...$$
where $Y_i$ is your response variable, $\ldots$ stands for your other variables, $FTR_i$ is a dummy for first-time runners, $PLT_i$ is the previous lap time, and $NFTR_i$ is a dummy for non-first-time runners, equal to 1 when $FTR_i=0$ and 0 otherwise.
Then the model for first time runners will be:
$$Y_i=(\beta_0+\beta_1) + ...$$
and for non first time runners:
$$Y_i=\beta_0+ \beta_2 PLT_i + ...$$
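A minimal Python illustration of this encoding (the runner data are invented): the interaction column $NFTR_i \times PLT_i$ is zero for first-time runners, so the non-existent previous lap time never enters the model and no sentinel value is needed:

```python
# Invented runner data, for illustration only.
runners = [
    (True, None),     # first-time runner: previous lap time does not exist
    (False, 72.4),
    (False, 69.1),
]

def encode(is_first_time, prev_lap_time):
    ftr = 1.0 if is_first_time else 0.0
    nftr = 1.0 - ftr
    # NFTR * PLT: zero for first-timers, so the non-existent lap time
    # never enters the model and no placeholder value is required.
    plt_term = nftr * (prev_lap_time if prev_lap_time is not None else 0.0)
    return [1.0, ftr, plt_term]       # columns: intercept, FTR, NFTR*PLT

X = [encode(f, p) for f, p in runners]

assert X[0] == [1.0, 1.0, 0.0]        # first-timer: lap-time column is 0
assert X[1] == [1.0, 0.0, 72.4]
```

Whatever value stands in for the missing lap time is multiplied by zero, which is exactly why the coefficients for first-time runners are unaffected by it.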
23,876 | How to handle non existent (not missing) data? | For a logistic regression fitted by maximum likelihood, as long as you have both (1) and (2) in the model, then no matter what "default" value that you give new runners for (2), the estimate for (1) will adjust accordingly.
For example, let $X_1$ be the indicator variable for "is a new runner", and $X_2$ be the variable "previous laptime in seconds". Then the linear predictor is:
$\eta = \alpha + \beta_1 X_1 + \beta_2 X_2 + \ldots$
If the default for $X_2$ is zero, then the linear predictor for a new runner is:
$\eta = \alpha + \beta_1 + \ldots$
whereas for an existing runner, it will be:
$\eta = \alpha + \beta_2 X_2 + \ldots$
Now suppose that you change the default for $X_2$ from 0 to -99. Then the linear predictor for a new runner is now:
$\eta = \alpha + \beta'_1 - 99 \beta_2 + \ldots$
but for an existing runner, it will remain the same. So all you've done is reparameterise the model, such that $\beta'_1 - 99 \beta_2 = \beta_1$, and since maximum likelihood is parameterisation invariant, the estimates will adjust accordingly.
Of course, if you're not using maximum likelihood (i.e. you're using some sort of penalisation or prior on the parameters), then you're going to get different values unless you adjust the penalisation/prior accordingly. And if the model is non-linear (e.g. SVM, NN & Decision trees), then this argument doesn't work at all.
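The reparameterisation can be checked numerically. In the Python sketch below (with invented coefficient values, not a fitted model), shifting the default for $X_2$ from 0 to $-99$ while setting $\beta'_1 = \beta_1 + 99\beta_2$ leaves every linear predictor, and hence every fitted probability, unchanged:

```python
import math

alpha, b1, b2 = -0.3, 0.8, 0.05       # invented "fitted" coefficients

def eta(x1, x2, b1_used, default):
    """Linear predictor; new runners (x1 = 1) get `default` for x2."""
    x2_used = default if x1 == 1 else x2
    return alpha + b1_used * x1 + b2 * x2_used

def prob(e):
    """Logistic link: linear predictor -> probability."""
    return 1.0 / (1.0 + math.exp(-e))

b1_new = b1 + 99 * b2                 # reparameterised new-runner coefficient

cases = [(1, None), (0, 61.0), (0, 75.5)]   # (is_new_runner, previous_time)
for x1, x2 in cases:
    e0 = eta(x1, x2, b1, default=0.0)
    e99 = eta(x1, x2, b1_new, default=-99.0)
    assert abs(e0 - e99) < 1e-9                  # same linear predictor
    assert abs(prob(e0) - prob(e99)) < 1e-12     # same fitted probability
```

Since maximum likelihood depends on the data only through these fitted probabilities, the two encodings describe the same family of models and yield the same fit.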
23,877 | Censoring/Truncation in JAGS | Perhaps this is what you are looking for:
x_obs[i] ~ dnorm(x_true[i],prec_x)T(x_true[i], )
JAGS has options for both censoring and truncation. It sounds like you want truncation, since you know a priori that the observation lies within a particular range.
Read the user's manual for more details about how JAGS uses truncation and censoring.
23,878 | Censoring/Truncation in JAGS | Thanks for the tips David. I posted this question on the JAGS support forum and got a useful answer. The key was to use a two dimensional array for the 'true' values.
for (j in 1:n){
  x_obs[j] ~ dnorm(xy_true[j,1], prec_x)T(xy_true[j,1],)
  y_obs[j] ~ dnorm(xy_true[j,2], prec_y)
  xy_true[j, ] ~ dmnorm(mu[z[j], 1:2], tau[z[j], 1:2, 1:2])
  z[j] ~ dcat(prob[])
}
# priors for measurement error
e_x ~ dunif(0.1, 0.9)
prec_x <- 1/pow(e_x, 2)
e_y ~ dunif(2, 4)
prec_y <- 1/pow(e_y, 2)
23,879 | How do I know which method of parameter estimation to choose? | There's a slight confusion of two things here: methods for deriving estimators, and criteria for evaluating estimators. Maximum likelihood (ML) and method-of-moments (MoM) are ways of deriving estimators; uniform minimum-variance unbiased (UMVU) estimation and decision theory are criteria for evaluating different estimators once you have them, but they won't tell you how to derive them.
Of the methods for deriving estimators, ML usually produces estimators that are more efficient (i.e. lower variance) than MoM if you know the model under which your data were derived (the 'data-generating process' (DGP) in the jargon). But MoM makes fewer assumptions about the model; as its name implies, it only uses one or more moments, usually just the mean or just the mean and variance, so it's sometimes more robust if you're not sure about the DGP. There can be more than one MoM estimator for the same problem, while if you know the DGP, there is only one ML estimator.
Of the methods for evaluating estimators, the decision-theoretic approach depends on having a loss function to use to judge your estimator, although the results can be fairly robust to a range of 'reasonable' loss functions. UMVU estimators often don't even exist; in many cases there is no unbiased estimator that always has minimum variance. And the criterion of unbiasedness is also of questionable usefulness, as it's not invariant to transformations. For example, would you prefer an unbiased estimator of the odds ratio, or of the log odds ratio? The two will be different.
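To make the efficiency point concrete, here is a small Monte Carlo sketch (the setup is invented for illustration): for data from Uniform(0, θ), the MoM estimator matches E[X] = θ/2 to give 2·(sample mean), while the ML estimator is the sample maximum.

```python
# Monte Carlo sketch (setup invented): estimating theta for Uniform(0, theta).
# MoM matches E[X] = theta/2, giving 2 * (sample mean); ML is the sample maximum.
import random

rng = random.Random(1)
theta, n, reps = 10.0, 50, 2000
mom_mse = ml_mse = 0.0
for _ in range(reps):
    x = [rng.uniform(0, theta) for _ in range(n)]
    mom_mse += (2 * sum(x) / n - theta) ** 2
    ml_mse += (max(x) - theta) ** 2   # biased low, but far less variable
mom_mse /= reps
ml_mse /= reps
assert ml_mse < mom_mse               # ML wins on mean squared error
```

In theory the MoM mean squared error here is θ²/(3n) ≈ 0.67, while the ML estimator's is about 0.075 despite its downward bias.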
23,880 | How do I know which method of parameter estimation to choose? | I would suggest that the type of estimator depends on a few things:
What are the consequences of getting the estimate wrong? (e.g. is it less bad if your estimator is too high, compared to being too low? or are you indifferent about the direction of error? if an error is twice as big, is this twice as bad? is it percentage error or absolute error that is important? Is the estimation only an intermediate step required for prediction? is large sample behaviour more or less important than small sample behaviour?)
What is your prior information about the quantity you are estimating? (e.g. how is the data functionally related to your quantity? do you know if the quantity is positive? discrete? have you estimated this quantity before? how much data do you have? Is there any "group invariance" structure in your data?)
What software do you have? (e.g. no good suggesting MCMC if you don't have the software to do it, or using a GLMM if you don't know how to do it.)
The first two points are context specific, and by thinking about your specific application, you will generally be able to define certain properties that you would like your estimator to have. You then choose the estimator which you can actually calculate, which has as many of the properties which you want it to have.
I think the lack of context that a teaching course has with estimation means that "default" criteria are often used, similarly for prior information (the most obvious "default" being that you know the sampling distribution of your data). Having said that, some of the default methods are good, especially if you don't know enough about the context. But if you do know the context, and you have the tools to incorporate that context, then you should, for otherwise you may get counter-intuitive results (because of what you ignored).
I'm not a big fan of MVUE as a general rule, because you often need to sacrifice too much variance to get unbiasedness. For example, imagine you are throwing darts at a dartboard, and you want to hit the bullseye. Suppose that the maximum deviation from the bullseye is 6cm for a particular throwing strategy, but the center of the dart points is 1cm above the bullseye. This is not MVUE, because the center should be on the bullseye. But suppose that in order to shift the distribution down 1cm (on average), you have to increase your radius to at least 10cm (so the maximum error is now 10cm, and not 6cm). This is the kind of thing that can happen with MVUE, unless the variance is already small. Suppose I was a much more accurate thrower, and could narrow my error to 0.1cm. Now the bias really matters, because I will never hit the bullseye!
In short, for me, bias only matters when it is small compared to the variance. And you will usually only get small variances when you have a large sample.
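The dartboard trade-off can be checked numerically. A standard instance (numbers invented for the demo): for Normal data with true variance 1, the "unbiased" variance estimator that divides the sum of squared deviations by n−1 has a higher mean squared error than the biased one that divides by n+1.

```python
# Numeric version of the dartboard point (numbers invented): a little bias can
# buy a lot of variance. For Normal data with true variance 1, compare the MSE
# of the unbiased variance estimator (divide by n-1) with a biased one (n+1).
import random

rng = random.Random(7)
n, reps, true_var = 10, 4000, 1.0
mse_unbiased = mse_biased = 0.0
for _ in range(reps):
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]
    m = sum(x) / n
    ss = sum((xi - m) ** 2 for xi in x)
    mse_unbiased += (ss / (n - 1) - true_var) ** 2
    mse_biased += (ss / (n + 1) - true_var) ** 2
mse_unbiased /= reps
mse_biased /= reps
assert mse_biased < mse_unbiased  # the biased estimator has smaller overall error
```

Accepting a small downward bias shrinks the variance enough to lower the total error, exactly the dartboard situation above.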
23,881 | Why are exact tests preferred over chi-squared for small sample sizes? | In a classical hypothesis test, you have a test statistic that orders the evidence from that which is most conducive to the null hypothesis to that which is most conducive to the alternative hypothesis. (Without loss of generality, suppose that a higher value of this statistic is more conducive to the alternative hypothesis.) The p-value of the test is the probability of observing evidence at least as conducive to the alternative hypothesis as what you actually observed (a test statistic at least as large as the observed value) under the assumption that the null hypothesis is true. This is computed from the null distribution of the test statistic, which is its distribution under the assumption that the null hypothesis is true.
Now, an "exact test" is a test that computes the p-value exactly ---i.e., it computes this from the true null distribution of the test statistic. In many statistical tests, the true null distribution is complicated, but it can be approximated by another distribution, and it converges to that approximating distribution as $n \rightarrow \infty$. In particular, the so-called "chi-squared tests" are hypothesis tests where the true null distribution converges to a chi-squared distribution.
So, in a "chi-squared test" of this kind, when you compute the p-value of the test using the chi-squared distribution, this is just an approximation to the true p-value. The true p-value of the test is given by the exact test, and you are approximating this value using the approximating null distribution of the test statistic. When $n$ is large this approximation is very good, but when $n$ is small the approximation may be poor. For this reason, statisticians counsel against using the "chi-squared tests" (i.e., using the chi-squared approximation to the true null distribution) when $n$ is small.
Chi-squared tests for independence in contingency tables: Now I will examine your specific questions in relation to chi-squared tests for testing independence in contingency tables. In this context, if we have a contingency table with observed counts $O_1,...,O_K$ summing to $n \equiv \sum O_i$ then the test statistic is the Pearson statistic:
$$\chi^2 = \sum_{i=1}^K \frac{(O_i-E_i)^2}{E_i},$$
where $E_1,...,E_K$ are the expected cell values under the null hypothesis.$^\dagger$ The first thing to note here is that the observed counts $O_1,...,O_K$ are non-negative integers. For any $n<\infty$ this limits the possible values of the test statistic to a finite set of possible values, so its true null distribution will be a discrete distribution on this finite set of values. Note that the chi-squared distribution cannot be the true null distribution because it is a continuous distribution over all non-negative real numbers --- an (uncountable) infinite set of values.
As in other "chi-squared tests" the null distribution of the test statistic here is well approximated by the chi-squared distribution when $n$ is large. You are not correct to say that this is a matter of failing to "adequately approximate the theoretical chi-squared distribution" --- on the contrary, the theoretical chi-squared distribution is the approximation, not the true null distribution. The chi-squared approximation is good so long as none of the values $E_1,...,E_K$ is small. The reason that these expected values are small for low values of $n$ is that when you have a low total count value, you must expect the counts in at least some cells to be low.
$^\dagger$ For analysis of contingency tables, these expected cell counts are obtained by conditioning on the marginal totals under the null hypothesis of independence. It is not necessary for us to go into any further detail on these values.
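As a concrete illustration, here is a self-contained sketch for a small 2×2 table (table values invented; standard library only). The exact two-sided p-value sums hypergeometric probabilities conditional on the margins (Fisher's exact test), while the approximate p-value uses the df = 1 chi-squared tail via the identity sf(x) = erfc(sqrt(x/2)).

```python
# Compare the exact p-value with the chi-squared approximation on a small
# 2x2 table (values invented). Standard library only.
import math

def chi2_approx_p(table):
    """Pearson chi-squared p-value for a 2x2 table (df = 1, no continuity
    correction), using the identity sf(x; df=1) = erfc(sqrt(x/2))."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows, cols = (a + b, c + d), (a + c, b + d)
    stat = sum((obs - rows[i] * cols[j] / n) ** 2 / (rows[i] * cols[j] / n)
               for (i, j), obs in zip(((0, 0), (0, 1), (1, 0), (1, 1)), (a, b, c, d)))
    return math.erfc(math.sqrt(stat / 2))

def fisher_exact_p(table):
    """Two-sided Fisher exact p-value: total hypergeometric probability of all
    tables with the same margins that are no more likely than the observed one."""
    (a, b), (c, d) = table
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    def prob(k):  # P(top-left cell = k) under the null, margins fixed
        return math.comb(r1, k) * math.comb(r2, c1 - k) / math.comb(n, c1)
    p_obs = prob(a)
    return sum(prob(k) for k in range(max(0, c1 - r2), min(r1, c1) + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

table = [[1, 9], [11, 3]]   # small counts: the approximation is strained
p_exact = fisher_exact_p(table)
p_approx = chi2_approx_p(table)
```

For this table the exact p-value is roughly 0.0028 while the chi-squared approximation gives roughly 0.0009: the kind of discrepancy at small $n$ that motivates exact tests.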
23,882 | What's intended by "Let the data speak for itself"? | The interpretation depends on context, but there are some common contexts in which this comes up. The statement is often used in Bayesian analysis to stress the fact that we would ideally like the posterior distribution in the analysis to be robust to prior assumptions, so that the effect of the data "dominates" the posterior. More generally, the quote usually means that we want our statistical model to conform to the structure of the data, rather than forcing the data into an interpretation that is a non-verifiable structural assumption of the model.
The particular quote you are referring to is supplemented by the additional quotation: "The model must follow the data, not the other way around" (translated from Benzécri J (1973) L’Analyse des Données. Tome II: L’Analyse des Correspondances. Dunod, p. 6). Benzécri argued that statistical models should extract structure from the data, rather than imposing structure. He regarded the use of exploratory graphical methods as very important to allow the analyst to "let the data speak".
23,883 | What's intended by "Let the data speak for itself"? | Back in around 2005 when "Data Mining" was the latest threat to the statistical profession, I remember seeing a poster with "Data Mining Principles," one of which was "let the data speak" (can't remember if "for itself" was included). If you think about algorithms that might be considered "Data Mining," Apriori and recursive partitioning come to mind, two algorithms that can be motivated without statistical assumptions and result in pretty basic summaries of the underlying data set.
@Ben understands more of the history of the phrase than I do, but thinking about the quote as cited in the paper:
MCA can be seen as the counterpart of PCA for categorical data and involves reducing data dimensionality to provide a subspace that best represents the data in the sense of maximizing the variability of the projected points. As mentioned, it is often presented without any reference to probabilistic models, in line with Benzécri [1973]’s idea to “let the data speak for itself.”
it appears to me that the procedure of MCA does resemble Apriori or recursive partitioning (or hell, the arithmetic mean for that matter) in that it can be motivated without any modeling at all and is a mechanical operation on a data set that makes sense based on some first principles.
There is a spectrum of letting the data speak. Fully Bayesian models with strong priors would be at one end. Frequentist nonparametric models would be closer to the other end.
23,884 | Interpretation of Radon-Nikodym derivative between probability measures? | First, we don't need probability measures, just $\sigma$-finiteness. So let $\mathcal M = (\Omega, \mathscr F)$ be a measurable space and let $\mu$ and $\nu$ be $\sigma$-finite measures on $\mathcal M$.
The Radon-Nikodym theorem states that if $\mu(A) = 0 \implies \nu(A) = 0$ for all $A \in \mathscr F$, denoted by $\mu \gg \nu$, then there exists a non-negative Borel function $f$ such that
$$
\nu(A) = \int_A f \,\text d\mu
$$
for all $A \in \mathscr F$.
Here's how I like to think of this. First, for any two measures on $\mathcal M$, let's define $\mu \sim \nu$ to mean $\mu(A) = 0 \iff \nu(A) = 0$. This is a valid equivalence relation and we say that $\mu$ and $\nu$ are equivalent in this case. Why is this a sensible equivalence for measures? Measures are just functions but their domains are tricky to visualize. What if two ordinary functions $f, g :\mathbb R \to \mathbb R$ have this property, i.e. $f(x) = 0 \iff g(x) = 0$? Well, define
$$
h(x) = \begin{cases} f(x) / g(x) & g(x) \neq 0 \\ \pi^e & \text{o.w.}\end{cases}
$$
and note that anywhere on the support of $g$ we have $gh = f$, and outside of the support of $g$ $gh = 0 \cdot \pi^e = 0 = f$ (since $f$ and $g$ share supports) so $h$ lets us rescale $g$ into $f$. As @whuber points out, the key idea here is not that $0/0$ is somehow "safe" to do or ignore, but rather when $g = 0$ then it doesn't matter what $h$ does so we can just define it arbitrarily (like to be $\pi^e$ which has no special significance here) and things still work. Also in this case we can define the analogous function $h'$ with $g / f$ so that $fh' = g$.
Next suppose that $g(x) = 0 \implies f(x) = 0$, but the other direction does not necessarily hold. This means that our previous definition of $h$ still works, but now $h'$ doesn't work since it'll have actual divisions by $0$. Thus we can rescale $g$ into $f$ via $gh = f$, but we can't go the other direction because we'd need to rescale something $0$ into something non-zero.
Now let's return to $\mu$ and $\nu$ and denote our RND by $f$. If $\mu \sim \nu$, then this intuitively means that one can be rescaled into the other, and vice versa. But generally we only want to go one direction with this (i.e. rescale a nice measure like the Lebesgue measure into a more abstract measure) so we only need $\mu \gg \nu$ to do useful things. This rescaling is the heart of the RND.
Returning to @whuber's point in the comments, there is an extra subtlety to why it is safe to ignore the issue of $0/0$. That's because with measures we're only ever defining things up to sets of measure $0$ so on any set $A$ with $\mu(A) = 0$ we can just make our RND take any value, say $1$. So it is not that $0/0$ is intrinsically safe but rather anywhere that we would have $0/0$ is a set of measure $0$ w.r.t. $\mu$ so we can just define our RND to be something nice there without affecting anything.
As an example, suppose $k \cdot \mu = \nu$ for some $k > 0$. Then
$$
\nu(A) = \int_A \,\text d\nu = \int_A k \,\text d \mu
$$
so we have that $f(x) = k = \frac{\text d\nu}{\text d\mu}$ is the RND (this can be justified more formally by the change of measures theorem). This is good because we have exactly recovered the scaling factor.
Here's a second example to emphasize how changing RNDs on sets of measure $0$ doesn't affect them. Let $f(x) = \varphi(x) + 1_{\mathbb Q}(x)$, i.e. it's the standard normal PDF plus $1$ if the input is rational, and let $X$ be a RV with this density. This means
$$
P(X \in A) = \int_A \left(\varphi + 1_{\mathbb Q}\right) \,\text d\lambda
$$
$$
= \int_A \varphi \,\text d\lambda + \lambda\left(A \cap \mathbb Q \right) = \int_A \varphi \,\text d\lambda
$$
so actually $X$ is still a standard Gaussian RV. Changing the density on $\mathbb Q$ has not affected the distribution in any way, because $\mathbb Q$ is a set of measure $0$ w.r.t. $\lambda$.
As a final example, suppose $X \sim \text{Pois}(\eta)$ and $Y \sim \text{Bin}(n, p)$ and let $P_X$ and $P_Y$ be their respective distributions. Recall that a pmf is a RND with respect to the counting measure $c$, and since $c$ has the property that $c(A) = 0 \iff A = \emptyset$, it turns out that
$$
\frac{\text dP_Y}{\text dP_X} = \frac{\text dP_Y / \text dc}{\text dP_X / \text dc} = \frac{f_Y}{f_X}
$$
so we can compute
$$
P_Y(A) = \int_A \,\text dP_Y
$$
$$
= \int_A \frac{\text dP_Y}{\text dP_X}\,\text dP_X = \int_A \frac{\text dP_Y}{\text dP_X}\frac{\text dP_X}{\text dc}\,\text dc
$$
$$
= \sum_{y \in A} \frac{\text dP_Y}{\text dP_X}(y)\frac{\text dP_X}{\text dc}(y) = \sum_{y \in A} \frac{f_Y(y)}{f_X(y)}f_X(y) = \sum_{y \in A} f_Y(y).
$$
Thus because $P(X = n) > 0$ for all $n$ in the support of $Y$, we can rescale integration with respect to a Poisson distribution into integration with respect to a binomial distribution, although because everything's discrete it turns out to look like a trivial result.
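This discrete change of measure can be verified directly in a few lines (parameter values invented): summing the Radon-Nikodym derivative $f_Y/f_X$ against the Poisson pmf over an event recovers the binomial probability of that event.

```python
# Direct check of the discrete change-of-measure above (parameters invented):
# summing the RND dP_Y/dP_X against the Poisson pmf over an event A recovers
# the binomial probability of A.
import math

def pois_pmf(k, eta):
    return math.exp(-eta) * eta ** k / math.factorial(k)

def binom_pmf(k, n, p):
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

n, p, eta = 10, 0.3, 4.0

def rnd(k):  # dP_Y/dP_X, well defined: Poisson puts positive mass on 0..n
    return binom_pmf(k, n, p) / pois_pmf(k, eta)

A = range(0, 4)  # the event {Y <= 3}
direct = sum(binom_pmf(k, n, p) for k in A)
via_rnd = sum(rnd(k) * pois_pmf(k, eta) for k in A)
assert abs(direct - via_rnd) < 1e-12
```

In the discrete case the identity is algebraically trivial, as noted above, but the same weighting idea is what makes change of measure (e.g. importance sampling) work in general.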
I addressed your more general question but didn't touch on KL divergences. For me, at least, I find KL divergence much easier to interpret in terms of hypothesis testing like @kjetil b halvorsen's answer here. If $P \ll Q$ and there exists a measure $\mu$ that dominates both then using $\frac{\text dP}{\text dQ} = \frac{\text dP / \text d\mu}{\text dQ / \text d\mu} := p / q$ we can recover the form with densities, so for me I find that easier.
Origin and spelling of (multi)collinear/colinear
Collinear follows the model of collaborate, collide, &c.: the m of the Latin prefix com- ("together") is assimilated to the initial l of the Latinate stem (cf. commiserate, contemporary, coæval, corrode). It's pronounced with the stress on the second syllable, & therefore an indeterminate vowel or at most a short o.
An excuse for colinear might be that you're treating linear as a native word, following the model of copilot, co-worker, &c.—cases in which the prefix is reduced to co- irrespective of the initial letter of the stem.† I imagine people who write it thus also pronounce it with at least a secondary stress on the first syllable, & with a long o.
Pace @Carl I don't think a general predilection among the British for writing double l has much to do with it, though a preference for more traditional word-forms might. Counts of occurrences in published works from Google Ngrams suggest that colinear & its derivatives are disfavoured only slightly less in U.S. than in British English (an odds ratio of 1.4 over 1999 – 2008).
library(ngramr)
# define word list & corpora
words <- "collinear, colinear, collinearity, colinearity, multicollinear, multicolinear, multicollinearity, multicolinearity"
corpora <- c("eng_gb_2012", "eng_us_2012")
# fetch word counts
dd <- ngram(words, corpora, year_start = 1999, smoothing = 0, count = TRUE, tag = NULL, case_ins = TRUE)
# reduce derivatives to their common stem
dd$stem <- factor(gsub("multi|ity", "", tolower(dd$Phrase)))
# tabulate counts by stem and corpus
tb <- xtabs(Count ~ stem + Corpus, data = dd)
What may well be muddying the waters, however, is that there are more recent coinings of the word with different senses from the geometric one "together in a line" (first known use in 1863 according to my dictionary); in these we'd naturally expect the form colinear just because people don't make up Latin words any more. Wikipedia has an article on colinear maps & the on-line Merriam-Webster dictionary gives a second sense of colinear (but not collinear), "having corresponding parts arranged in the same linear order", that finds its use in Genetics & Molecular Biology.
† If you really want to write colinear & anyone's picking on you because of it, ask them if they write complanar.
Origin and spelling of (multi)collinear/colinear
Colinear is a U.S. English spelling. In the U.S., "collinear", with two l's, is also used. In British English, the collinear spelling would be the accepted form.
Another example of doubled versus single "l" appears in the word modelling (especially British) and modeling (especially U.S.). Multicollinearity versus multicolinearity follows this same pattern.
This general American versus British pattern of spelling with one or two l's occurs for many words. However, "spelling" always has two l's. The divergence probably occurred during the 1800s, in one of the U.S. spelling reformations.
The correct spelling is whatever fits the journal's style, and many journals insist on either American or British spellings. However, Canadian journals often accept both spellings.
When to use collinearity and when to use multicollinearity is discussed here. @whuber Indeed, we have discussed this before.
Collinearity was first discussed in the $3^{rd}$ century, and rediscovered 1500 years later.
Clustering as dimensionality reduction
I think this is the "centroid method" (or the closely-related "centroidQR" method) described by Park, Jeon and Rosen. From Moon-Gu Jeon's thesis abstract:
Our Centroid method projects full dimensional data onto the centroid
space of its classes, which gives tremendous dimensional reduction,
reducing the number of dimension to the number of classes while
improving the original class structure. One of its interesting
properties is that even when using two different similarity measures,
the results of classification for the full and the reduced dimensional
space formed by the Centroid are identical when the centroid-based
classification is applied. The second method, called CentroidQR, is a
variant of our Centroid method, which uses as a projection space, k
columns of orthogonal matrix Q from QR decomposition of the centroid
matrix.
It also seems to be equivalent to the "multiple group" method from Factor Analysis.
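Here is a minimal numpy sketch of the CentroidQR idea as I read the abstract (not the authors' code; the data and labels are made up): stack the class centroids into a $d \times k$ matrix, take its QR decomposition, and project the data onto the orthonormal columns of $Q$.

```python
import numpy as np

def centroid_qr_reduce(X, labels):
    """Project X (n x d) onto Q from a QR decomposition of the
    d x k class-centroid matrix, giving a k-dimensional representation."""
    classes = np.unique(labels)
    C = np.stack([X[labels == c].mean(axis=0) for c in classes], axis=1)  # d x k
    Q, _ = np.linalg.qr(C)   # Q: d x k with orthonormal columns
    return X @ Q             # n x k reduced data

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5)) + np.repeat(np.eye(3, 5) * 4, 20, axis=0)  # 3 classes
labels = np.repeat([0, 1, 2], 20)
Z = centroid_qr_reduce(X, labels)
print(Z.shape)  # (60, 3)
```

The dimension drops from $d$ to the number of classes $k$, matching the "number of dimensions equals number of classes" claim in the abstract.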
Clustering as dimensionality reduction
Look at all the literature on pivot-based indexing.
But you gain little by using k-means. Usually, you can just use random points as pivots. If you choose enough, they won't all be similar.
Clustering as dimensionality reduction
There are several ways to use clustering as dimension reduction. For K-means, you can also project the points (orthogonally) onto the vector (or affine) space generated by the centres.
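A minimal numpy sketch of that projection, assuming the centres are already given (e.g. by k-means); the data here is made up:

```python
import numpy as np

def project_onto_centres(X, centres):
    """Orthogonally project each row of X onto the affine space
    spanned by the centres; also return the low-dim coordinates."""
    c0 = centres[0]
    B = (centres[1:] - c0).T                        # basis vectors, d x (k-1)
    coords, *_ = np.linalg.lstsq(B, (X - c0).T, rcond=None)
    return (B @ coords).T + c0, coords.T

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
centres = rng.normal(size=(3, 10))                  # stand-ins for k-means centres
proj, Z = project_onto_centres(X, centres)          # Z is (k-1)-dimensional

# the centres lie in their own affine span, so they project to themselves
pc, _ = project_onto_centres(centres, centres)
print(np.allclose(pc, centres))  # True
```

With $k$ centres the affine span has dimension at most $k-1$, so this reduces 10-dimensional points to 2 coordinates here.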
Central Limit Theorem and t-test
Okay, a few things:
1) A two-sample t-test does not assume the distributions of groups A and B are the same under the null hypothesis, even if the underlying distributions are both normal. That can only occur if you assume the standard deviations are the same, which is a hefty assumption to have. The two-sample t-test tests whether, under the null, the means of the two groups are the same. But yes, the classical two-sample t-test assumes the underlying data is normally distributed. This is the case because you not only need the numerator to be normally distributed, but also the sample variance to be (a scaled version of) a $\chi^2$. That being said, the t-test is fairly robust against the assumption of normality. See here.
2) It is true that with a large enough sample, the sampling distribution of the mean of each group is going to be approximately normal. How good that approximation is depends on the underlying distribution of each group.
The general idea is this. If $X$ and $Y$ are independent, with $X$ having mean $\mu_X$ and standard deviation $\sigma_X$ and $Y$ having mean $\mu_Y$ with standard deviation $\sigma_Y$, and the respective sample $X_1,\dots,X_n$ and $Y_1,\dots,Y_m$ are large, then you can conclude
$$
\frac{\bar{X}-\bar{Y}-(\mu_X-\mu_Y)}{\sqrt{\frac{\sigma^2_X}{n}+\frac{\sigma^2_Y}{m}}}
$$
is approximately normal with mean 0 and standard deviation 1. So critical values $z_{\alpha/2}$ can be used to do testing. Also, $t_{\alpha/2,\nu}$ is going to be close to $z_{\alpha/2}$ when $\nu$ is large (which occurs if the sample sizes are large). So for large enough sample sizes, a t-test can be used.
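That convergence of $t_{\alpha/2,\nu}$ to $z_{\alpha/2}$ can be checked directly (a quick sketch using scipy; the degrees-of-freedom values are arbitrary):

```python
from scipy import stats

alpha = 0.05
z = stats.norm.ppf(1 - alpha / 2)          # z_{alpha/2}, about 1.96
for nu in (5, 30, 100, 1000):
    t = stats.t.ppf(1 - alpha / 2, df=nu)  # t_{alpha/2, nu}
    print(nu, round(t, 3))                 # shrinks toward 1.96 as nu grows
```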
There are ways to check for this. (The standard rule of thumb is that each group has a sample size of 30 or larger, but I am usually against those rules because there are plenty of cases where that rule fails). One way you can check it (sort of) is to create a bootstrap distribution of the mean and see.
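One way to run that bootstrap check (a sketch; the skewed example data is made up):

```python
import numpy as np
rng = np.random.default_rng(0)

x = rng.exponential(scale=2.0, size=40)   # hypothetical skewed group
boot_means = np.array([
    rng.choice(x, size=x.size, replace=True).mean()
    for _ in range(5000)
])
# if a histogram of boot_means looks roughly normal, the normal
# approximation for this group's mean is plausible at this sample size
print(round(boot_means.mean(), 2), round(boot_means.std(), 2))
```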
3) You can do better than approximate tests though. When you are testing to see if the means differ, your real question is really to see if the locations differ. A test that will be correct (almost) all the time is the Mann-Whitney U test. This does not test whether the means differ, but rather whether the medians differ. In other words, it again tests whether one location differs from another. It may be a better option, and has pretty high power overall.
Central Limit Theorem and t-test
Short answer: your colleague is right.
In the end, the t statistic depends only on the mean and variance of the two samples. The CLT says that (under most circumstances) those rapidly become normal even when the underlying population distribution is not.
So the t-test is quite robust to (most) departures from normality. This has been verified by many simulation studies. Note, by the way, that it is not at all robust to departures from homogeneity of variance.
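That robustness is easy to see in a quick simulation sketch (the skewed distribution, sample size, and replication count here are arbitrary choices): with skewed but identical populations, the t-test still rejects at roughly the nominal rate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, reps, alpha = 50, 2000, 0.05

rejections = 0
for _ in range(reps):
    a = rng.exponential(scale=1.0, size=n)   # same skewed distribution,
    b = rng.exponential(scale=1.0, size=n)   # so the null is true
    if stats.ttest_ind(a, b).pvalue < alpha:
        rejections += 1

print(rejections / reps)   # close to the nominal 0.05
```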
Violated Normality of Residuals Assumption in Linear Mixed Model
My answer to your questions would be (1) "yes" (I would worry a bit about the initial degree of non-Normality), (2) "no" (log-transformation seems to have improved the situation), (3) N/A (since I'm not worried), but a few more things to try if you are worried would be:
use robustlmm::rlmer() to do a robust LMM fit;
try the fit without the points that give the most extreme residuals (try lattice::qqmath(log_fit,id=0.1,idLabels=~.obs) to identify them by observation number) and see if it makes much of a difference
try another transformation to get closer to Normality (although I played around with this a little bit and it doesn't seem to help)
I'm a little surprised by the apparent mismatch between your sims (these examples look farther from Normality by eye) and the Shapiro test results (fairly strong evidence against the null hypothesis of Normality).
Variational Inference in plain english
Not based on my knowledge, but here's a paper (in fairly plain English) that I think is very relevant to the question:
Blei, Kucukelbir & McAuliffe 2016. Variational Inference: A Review for Statisticians. https://arxiv.org/abs/1601.00670
From the abstract:
One of the core problems of modern statistics is to approximate difficult-to-compute probability densities. This problem is especially important in Bayesian statistics, which frames all inference about unknown quantities as a calculation involving the posterior density. In this paper, we review variational inference (VI), a method from machine learning that approximates probability densities through optimization. VI has been used in many applications and tends to be faster than classical methods, such as Markov chain Monte Carlo sampling. The idea behind VI is to first posit a family of densities and then to find the member of that family which is close to the target. Closeness is measured by Kullback-Leibler divergence. We review the ideas behind mean-field variational inference, discuss the special case of VI applied to exponential family models, present a full example with a Bayesian mixture of Gaussians, and derive a variant that uses stochastic optimization to scale up to massive data. We discuss modern research in VI and highlight important open problems. VI is powerful, but it is not yet well understood. Our hope in writing this paper is to catalyze statistical research on this class of algorithms.
They also offer guidance on when statisticians should use Markov chain Monte Carlo sampling and when variational inference (see the paragraph Comparing variational inference and MCMC in the article).
difference between neural network and deep learning
Deep learning = deep artificial neural networks + other kinds of deep models.
Deep artificial neural networks = artificial neural networks with more than 1 layer. (see minimum number of layers in a deep neural network)
difference between neural network and deep learning
Frank Dernoncourt has a better general-purpose answer, but I think it's worth mentioning that when people use the broad term "Deep Learning" they're often implying the use of recent techniques, like convolution, that you wouldn't find in older/traditional (fully-connected) neural networks. For image recognition problems, convolution can enable deeper neural networks because convolutional neurons/filters reduce the risk of overfitting somewhat by sharing weights.
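A back-of-the-envelope illustration of that weight-sharing point (the image size and filter counts are illustrative, not from the answer):

```python
# Mapping a 32x32 single-channel image to a same-size feature map:
h = w = 32
fc_params = (h * w) * (h * w)                 # dense layer: every pixel -> every output
k, filters = 3, 8
conv_params = k * k * 1 * filters + filters   # eight shared 3x3 kernels + biases
print(fc_params, conv_params)                 # 1048576 vs 80
```

The shared kernels cut the parameter count by four orders of magnitude here, which is one reason deeper convolutional stacks remain trainable.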
Frank Dernoncourt has a better general purpose answer, but I think it's worth mentioning that when people use the broad term "Deep Learning" they're often implying the use of recent techniques, like convolution, that you wouldn't find in older/traditional (fully-connected) neural networks. For image recognition problems, convolution can enable deeper neural networks because convoluted neurons/filters reduce the risk of overfitting somewhat by sharing weights. | difference between neural network and deep learning
Frank Dernoncourt has a better general purpose answer, but I think it's worth mentioning that when people use the broad term "Deep Learning" they're often implying the use of recent techniques, like c |
23,896 | difference between neural network and deep learning
Neural networks with a lot of layers are deep architectures.
However, the backpropagation learning algorithm used in neural networks doesn't work well when the network is very deep. Learning algorithms for deep architectures ("deep learning") have to address this. For example, Boltzmann machines use a contrastive learning algorithm instead.
Coming up with a deep architecture is easy. Coming up with a learning algorithm that works well for a deep architecture has proven difficult.
23,897 | difference between neural network and deep learning
Deep learning requires a neural network having multiple layers — each layer doing mathematical transformations and feeding into the next layer. The output from the last layer is the decision of the network for a given input. The layers between the input and output layer are called hidden layers.
A deep learning neural network is a massive collection of perceptrons interconnected in layers. The weights and biases of each perceptron in the network influence the nature of the output decision of the entire network. In a perfectly tuned neural network, the weights and biases of all the perceptrons are such that the output decision is always correct (as expected) for all possible inputs. How are the weights and biases configured? This happens iteratively during the training of the network — a process called deep learning. (Sharad Gandhi)
23,898 | How to fit weights into Q-values with linear function approximation
Function approximation is basically a regression problem (in the general sense, i.e. as opposed to classification, where the class is discrete): one tries to learn a function mapping from input (in your case $f(s,a)$) to a real-valued output $Q(s,a)$. Since we do not have a full table of all input/output values, but instead learn and estimate $Q(s,a)$ at the same time, the parameters (here: the weights $w$) cannot be calculated directly from the data. A common approach here is to use gradient descent.
Here is the general algorithm for learning $Q(s,a)$ with Value Function Approximation:
Init parameter vector $w=(w_1,w_2,\ldots,w_n)$ randomly (e.g. in $[0,1]$)
For each episode:
1. $s\leftarrow$ initial state of the episode
2. $a\leftarrow$ action given by policy $\pi$ (recommended: $\epsilon$-greedy)
3. Take action $a$, observe reward $r$ and next state $s'$
4. $w\leftarrow w + \alpha\,\bigl(r+\gamma \max_{a'}Q(s',a') - Q(s,a)\bigr)\,\vec\nabla_wQ(s,a)$
5. $s\leftarrow s'$
6. Repeat 2-5 until $s$ is terminal
where ...
$\alpha\in[0,1]$ is the learning rate
$\gamma\in[0,1]$ is the discount rate
$\max_{a'}Q(s',a')$ is the maximum of $Q(s',a')$ over the actions $a'$ available in state $s'$
$\vec\nabla_wQ(s,a)$ is the gradient of $Q(s,a)$ in $w$. In your linear case, the gradient is simply a vector $(f_1(s,a),...,f_n(s,a))$
The parameter/weight update (4th step) can be read as follows:
$(r+\gamma \max_{a'}Q(s',a')) - Q(s,a)$ is the error between the prediction $Q(s,a)$ and the "actual" value for $Q(s,a)$, which is the reward $r$ obtained now PLUS the expected, discounted reward obtained by following the greedy policy afterwards, $\gamma \max_{a'}Q(s',a')$.
So the parameter/weight vector is shifted in the steepest direction (given by the gradient $\vec\nabla_wQ(s,a)$) by the amount of the measured error, scaled by $\alpha$.
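As an illustration of this update rule, here is a minimal sketch in Python. Everything environment-specific (the toy 5-state chain, the one-hot features, the constants) is an assumption made up for the example; with one-hot features the linear case reduces to a tabular update, which keeps the sketch short:

```python
import numpy as np

# Sketch of Q-learning with linear function approximation.
# Features are one-hot in (state, action), so Q(s,a) = w . f(s,a)
# and the gradient of Q with respect to w is just f(s,a).

n_states, n_actions = 5, 2
alpha, gamma, epsilon = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def features(s, a):
    f = np.zeros(n_states * n_actions)
    f[s * n_actions + a] = 1.0
    return f

def q(w, s, a):
    return w @ features(s, a)

def step(s, a):
    """Toy dynamics (hypothetical): action 1 moves right, action 0 moves left.
    Reward 1 only on reaching the last state, which is terminal."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    r = 1.0 if s2 == n_states - 1 else 0.0
    return s2, r, s2 == n_states - 1

w = rng.random(n_states * n_actions)   # random init in [0, 1)
for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            a = int(rng.integers(n_actions))
        else:
            a = int(np.argmax([q(w, s, b) for b in range(n_actions)]))
        s2, r, done = step(s, a)
        # no bootstrapping from the terminal state
        target = r if done else r + gamma * max(q(w, s2, b) for b in range(n_actions))
        # gradient step: w <- w + alpha * (target - Q(s,a)) * grad_w Q(s,a)
        w += alpha * (target - q(w, s, a)) * features(s, a)
        s = s2

# After training, moving right from state 0 should look better than moving left.
print(q(w, 0, 1), q(w, 0, 0))
```

With these one-hot features the learned $Q(0,1)$ should approach $\gamma^3 = 0.729$, the discounted reward for walking straight to the terminal state.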
Main Source:
Chapter 8 Value Approximation of the (overall recommended) book Reinforcement Learning: An Introduction by Sutton and Barto (First Edition). The general algorithm has been modified as it is commonly done to calculate $Q(s,a)$ instead of $V(s)$. I have also dropped the eligibility traces $e$ to focus on gradient descent, hence using only one-step-backups
More references
Playing Atari with Deep Reinforcement Learning by Mnih et al. shows a great practical example of learning $Q(s,a)$ with backpropagated Neural Networks (where Gradient Descent is incorporated into the regression algorithm).
A Brief Survey of Parametric Value Function Approximation by Geist and Pietquin. Looks promising, but I have not read it yet.
23,899 | Why regret is used in online machine learning and is there any intuitive explanation about it?
"Regret" as a term that applies to online machine learning is one that lends itself very easily to an intuitive explanation.
Minimizing (or, alternatively, optimizing for) "regret" is simply reducing the number of actions taken for which, in hindsight, there was apparently a better choice. By minimizing regret, we minimize the suboptimal actions taken by the algorithm.
Depending on the application of the online machine learning algorithm, there can be many, many other measurements to be optimized.
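The idea can be made concrete with a small simulation. Below is a hypothetical sketch (the bandit setup, reward probabilities, and epsilon-greedy learner are all assumptions for illustration, not taken from any particular paper): it measures how much total reward the learner gave up relative to the single best action chosen in hindsight.

```python
import numpy as np

# Hypothetical 3-armed Bernoulli bandit; the learner is epsilon-greedy.
rng = np.random.default_rng(0)
true_means = np.array([0.2, 0.5, 0.8])   # assumed reward probabilities
T = 10_000
counts = np.zeros(3)
sums = np.zeros(3)
learner_reward = 0.0

for t in range(T):
    if rng.random() < 0.1 or counts.min() == 0:    # explore
        a = int(rng.integers(3))
    else:                                          # exploit current estimates
        a = int(np.argmax(sums / counts))
    r = float(rng.random() < true_means[a])        # Bernoulli reward
    counts[a] += 1
    sums[a] += r
    learner_reward += r

# Regret: what the best single action would have earned in expectation,
# minus what the learner actually earned.
regret = T * true_means.max() - learner_reward
print(f"average per-round regret: {regret / T:.3f}")
```

The per-round regret here stays small because exploration is cheap relative to the horizon; a learner that never stopped exploring uniformly would instead accumulate regret linearly in $T$.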
Several specific papers you may be interested in discuss the topic in depth:
Learning, Regret minimization, and Equilibria - A. Blum and Y. Mansour
Optimization for Machine Learning - Hazan
Online Learning and Online Convex Optimization - Shalev-Shwartz
23,900 | Splines - basis functions - clarification
You can't get there from here. The basis splines in your graph do not emerge as a straightforward algebraic manipulation of the equation you have supplied -- at least, not straightforward to me. But that's not where they come from.
The basis functions come from various theoretical results about B-splines. The spline function is the smoothest function that passes close to (or that interpolates) the sampled function values (the knot points). It can be shown that the solution to this optimization lies in a finite-dimensional function space composed of piecewise polynomials -- the degree of which depends on how much smoothness you want. The kink in the polynomials happens at the knot points.
So now we go in search of sensible basis functions. In your case, you have piecewise linear functions .. so the simplest piecewise linear function is a tent function ... and the tent pole has to occur at a knot point, because that's where we get the break in differentiability. The basis functions supplied by R are not the only choice, but they produce nice band matrices, cheap and easy to invert and otherwise manipulate.
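For concreteness, here is a brief sketch of such tent functions in Python (the knot positions and evaluation grid are arbitrary choices for illustration; this is not R's implementation):

```python
import numpy as np

# Piecewise-linear "tent" basis functions over a set of knots.
# Each basis function is 1 at its own (interior) knot and falls
# linearly to 0 at the two neighbouring knots.

def tent_basis(x, knots):
    """Evaluate one tent function per interior knot at the points x.

    Returns an array of shape (len(x), len(knots) - 2): one column per
    interior knot, using its two neighbours as the feet of the tent.
    """
    x = np.asarray(x)
    cols = []
    for i in range(1, len(knots) - 1):
        left, mid, right = knots[i - 1], knots[i], knots[i + 1]
        up = (x - left) / (mid - left)       # rising edge
        down = (right - x) / (right - mid)   # falling edge
        cols.append(np.clip(np.minimum(up, down), 0.0, None))
    return np.column_stack(cols)

knots = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
x = np.linspace(0, 1, 101)
B = tent_basis(x, knots)

# Each basis function peaks at 1 on its own knot:
print(B[25, 0], B[50, 1], B[75, 2])   # values at x = 0.25, 0.5, 0.75
```

Linear combinations of these columns reproduce exactly the piecewise-linear splines described above: linear between knots, with kinks only at the knots.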
Note that your basis functions must also respect the boundary conditions of the problem you have set yourself. The basis functions above will only give functions with $s(0)=0$. If I add the constant function to my basis, I can interpolate functions with $s(0)=c$.
Now convince yourself that the tent functions shown above are a basis for the requisite spline space. Consider what happens when you add linear combinations of the functions you illustrated above: they will all have linear portions, with kinks at the knot points. None of these can be obtained from the others. Finally, you need to show that you have the right number of them (I can't remember the formula, off hand, for the dimension of the spline space in terms of the number of knots and the degree of the polys).
Smoother results would be obtained by increasing the degree of the polynomials -- you could have piecewise quadratics, or cubics (the usual choice) $\ldots$ and then your basis functions will look like a sequence of bells centered about the knot points.
The truncated polynomials from your equation can also be used to build the spline smoother or interpolant, but they do not have the attractive numeric properties of the tent functions ... so that's why R does not supply them.
I would not attempt to learn about spline functions from the references cited above. Ramsay and Hooker's Functional Data Analysis with R and Matlab ties in the theory with implementations in R. You could also dig up the original papers by Kimeldorf and Wahba on smoothing and interpolating splines.