Conditional mean independence implies unbiasedness and consistency of the OLS estimator
You cannot prove this result, because it is not true in its general statement. Start with the model in your eq. $(4)$ $$Y=X\beta+Z\delta+\Big(E(U|Z)+V\Big)$$ where the big parenthesis denotes the actual error term (no assumptions yet on the conditional expectation). Define the residual-maker or annihilator matrix $M_Z = I - Z(Z'Z)^{-1}Z'$, which is symmetric and idempotent, and satisfies $M_ZZ = \mathbf 0$. By partitioned-regression results we have $$\hat \beta_{OLS} - \beta = (X'M_ZX)^{-1}X'M_ZZ\delta + (X'M_ZX)^{-1}X'M_ZE(U\mid Z) + (X'M_ZX)^{-1}X'M_ZV$$ The first term on the right is already zero, since $M_ZZ = \mathbf 0$. Taking the expected value throughout, and then applying the tower property of conditional expectation, the third term is also zero (using the conditional mean independence in its weaker form). But this is as far as this weaker assumption takes us, because we are left with $$E\Big(\hat \beta_{OLS} \Big)- \beta = E\Big[(X'M_ZX)^{-1}X'M_ZE(U\mid Z) \Big] $$ For unbiasedness we want the right-hand side to be zero. This will hold if $E(U\mid Z)$ is a linear function of $Z$ (as you also found), because we will again obtain the zero product $M_ZZ$. But otherwise it is totally arbitrary to directly assume that the whole expected value is zero. We do not have to assume joint normality, but we do have to assume linearity of this conditional expectation (other distributions also have this property). So the assumption needed for unbiasedness of $\hat \beta_{OLS}$ is $$E(U\mid X, Z)=E(U\mid Z)=Z\gamma$$ and I cannot say whether it is really "weaker" or not, compared to strict exogeneity of all regressors (since strict exogeneity is stated in terms of mean independence for all distributional assumptions, while here we have to restrict the classes of distributions that $U$ and $Z$ follow). It is not difficult to show that under this linearity assumption $\hat \beta_{OLS}$ will also be consistent.
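As a sanity check on the two cases above, here is a small Monte Carlo sketch in numpy. The data-generating process, functional forms, and parameters are my own illustrative assumptions (not taken from the question): $X$ depends on $Z$ nonlinearly, so when $E(U\mid Z)$ is linear in $Z$ the OLS coefficient on $X$ comes out essentially unbiased, while a nonlinear $E(U\mid Z)$ produces a visible bias.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_beta_error(g, n=500, reps=500, beta=1.0, delta=1.0):
    """Average (beta_hat - beta) when U = g(Z) + V and we regress Y on [1, X, Z]."""
    errs = np.empty(reps)
    for r in range(reps):
        z = rng.normal(size=n)
        x = z + 0.5 * z**2 + rng.normal(size=n)  # X depends on Z nonlinearly
        u = g(z) + rng.normal(size=n)            # E(U | X, Z) = g(Z)
        y = beta * x + delta * z + u
        G = np.column_stack([np.ones(n), x, z])
        coef, *_ = np.linalg.lstsq(G, y, rcond=None)
        errs[r] = coef[1] - beta
    return errs.mean()

bias_lin = mean_beta_error(lambda z: 2.0 * z)  # linear E(U|Z): bias vanishes
bias_sq = mean_beta_error(lambda z: z**2)      # nonlinear E(U|Z): bias remains
print(bias_lin, bias_sq)
```

The linear case prints a value near zero; the quadratic case prints a clearly positive average error, matching the bias term $E\big[(X'M_ZX)^{-1}X'M_ZE(U\mid Z)\big]$ being nonzero.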
How to get "eigenvalues" (percentages of explained variance) of vectors that are not PCA eigenvectors?
If the vectors are orthogonal, you can just take the variance of the scalar projection of the data onto each vector. Say we have a data matrix $X$ ($n$ points $\times$ $d$ dimensions), and a set of orthonormal column vectors $\{v_1, ..., v_k\}$. Assume the data are centered. The variance of the data along the direction of each vector $v_i$ is given by $\text{Var}(X v_i)$. If there are as many vectors as original dimensions ($k = d$), the sum of the variances of the projections will equal the sum of the variances along the original dimensions. But, if there are fewer vectors than original dimensions ($k < d$), the sum of variances will generally be less than for PCA. One way to think of PCA is that it maximizes this very quantity (subject to the constraint that the vectors are orthogonal). You may also want to calculate $R^2$ (the fraction of variance explained), which is often used to measure how well a given number of PCA dimensions represent the data. Let $S$ represent the sum of the variances along each original dimension of the data. Then: $$R^2 = \frac{1}{S}\sum_{i=1}^{k} \text{Var}(X v_i)$$ This is just the ratio of the summed variances of the projections and the summed variances along the original dimensions. Another way to think about $R^2$ is that it measures the goodness of fit if we try to reconstruct the data from the projections. It then takes the familiar form used for other models (e.g. regression). Say the $i$th data point is a row vector $x_{(i)}$. Store each of the basis vectors along the columns of matrix $V$. The projection of the $i$th data point onto all vectors in $V$ is given by $p_{(i)} = x_{(i)} V$. When there are fewer vectors than original dimensions ($k < d$), we can think of this as mapping the data linearly into a space with reduced dimensionality. We can approximately reconstruct the data point from the low dimensional representation by mapping back into the original data space: $\hat{x}_{(i)} = p_{(i)} V^T$. 
The mean squared reconstruction error is the mean squared Euclidean distance between each original data point and its reconstruction: $$E = \frac{1}{n} \sum_{i=1}^{n} \|x_{(i)} - \hat{x}_{(i)}\|^2$$ The goodness of fit $R^2$ is defined the same way as for other models (i.e. as one minus the fraction of unexplained variance). Given the mean squared error of the model ($\text{MSE}$) and the total variance of the modeled quantity ($\text{Var}_{\text{total}}$), $R^2 = 1 - \text{MSE} / \text{Var}_{\text{total}}$. In the context of our data reconstruction, the mean squared error is $E$ (the reconstruction error). The total variance is $S$ (the sum of variances along each dimension of the data). So: $$R^2 = 1 - \frac{E}{S}$$ $S$ is also equal to the mean squared Euclidean distance from each data point to the mean of all data points, so we can also think of $R^2$ as comparing the reconstruction error to that of the 'worst-case model' that always returns the mean as the reconstruction. The two expressions for $R^2$ are equivalent. As above, if there are as many vectors as original dimensions ($k = d$) then $R^2$ will be one. But, if $k < d$, $R^2$ will generally be less than for PCA. Another way to think about PCA is that it minimizes the squared reconstruction error.
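A small numpy sketch of these computations (hypothetical random data; the QR step is just one convenient way to get an arbitrary orthonormal set $V$), showing that the projection-variance and reconstruction forms of $R^2$ agree, and that the top-$k$ PCA directions score at least as high:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 500, 5, 2
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))  # correlated toy data
X = X - X.mean(axis=0)                                 # center the data

V, _ = np.linalg.qr(rng.normal(size=(d, k)))  # arbitrary orthonormal columns

S = X.var(axis=0).sum()                  # total variance along original dims
r2_proj = (X @ V).var(axis=0).sum() / S  # summed projection variances over S

X_hat = (X @ V) @ V.T                    # reconstruction from the projections
E = np.mean(np.sum((X - X_hat) ** 2, axis=1))  # mean squared reconstruction error
r2_recon = 1 - E / S

# top-k PCA directions (eigenvectors of the covariance matrix) for comparison
evals, evecs = np.linalg.eigh(np.cov(X.T, bias=True))
Vpca = evecs[:, ::-1][:, :k]
r2_pca = (X @ Vpca).var(axis=0).sum() / S

print(r2_proj, r2_recon, r2_pca)
```

The two $R^2$ values coincide to machine precision, and `r2_pca` bounds `r2_proj` from above, as the text argues.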
Relationship Between Percentile and Confidence Interval (On a Mean)
Your coworker is correct, confidence intervals are based on the percentiles of the sampling distribution of the statistic of interest. In this case, the statistic is $\hat{\mu}=\frac{1}{n}\sum X_i$. The percentiles of $X$ are different. You can try yourself to perform your experiment of drawing many $\hat{\mu}_i$ and calculating their percentiles. You will find good agreement with the normal theory formula provided the $n$ for each $\hat{\mu}_i$ is large enough. And if you keep thinking about it, you may end up reinventing the bootstrap, which uses the observed percentiles of $X$ to generate many $\hat{\mu}_i$ and then uses the percentiles of this generated sample to create a confidence interval.
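A quick simulation sketch of this in numpy (the normal population, $n$, and replication counts below are arbitrary illustrative choices): the empirical 2.5th/97.5th percentiles of many sample means line up with the normal-theory interval, and a percentile bootstrap approximates the same idea from a single observed sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 50_000
mu, sigma = 5.0, 2.0

# percentiles of the sampling distribution of the mean
mu_hats = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
lo, hi = np.percentile(mu_hats, [2.5, 97.5])

se = sigma / np.sqrt(n)
print(lo, hi)                          # empirical percentiles of the mu_hats
print(mu - 1.96 * se, mu + 1.96 * se)  # normal-theory 95% interval

# bootstrap flavor: resample one observed sample to build the same interval
x = rng.normal(mu, sigma, size=n)
boot = np.array([rng.choice(x, size=n, replace=True).mean() for _ in range(5000)])
print(np.percentile(boot, [2.5, 97.5]))
```

The first two printed pairs agree closely; the bootstrap interval is centered at the observed sample mean rather than at $\mu$, which is exactly how it is used in practice.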
Why does probabilistic PCA use Gaussian prior over latent variables?
Probabilistic PCA Probabilistic PCA is a Gaussian latent variable model of the following form. Observations $\mathbf x \in \mathbb R^D$ consist of $D$ variables, latent variables $\mathbf z \in \mathbb R^M$ are assumed to consist of $M<D$ variables; the prior over latent variables is a zero-mean unit-covariance Gaussian: $$\mathbf z \sim \mathcal N(\mathbf 0, \mathbf I),$$ and the conditional distribution of observed variables given the latent variables is $$\mathbf x | \mathbf z \sim \mathcal N(\mathbf W\mathbf z+\boldsymbol \mu, \sigma^2 \mathbf I).$$ It turns out that the maximum likelihood solution to this model is given by the first $M$ PCA components of the data: columns of $\mathbf W_\text{ML}$ are proportional to the top eigenvectors of the covariance matrix (principal axes). See Tipping & Bishop for details. Why use a Gaussian prior? For any other prior (or at least for most other priors) the maximum likelihood solution will not correspond to the standard PCA solution, so there would be no reason to call this latent variable model "probabilistic PCA". The Gaussian $\mathcal N(\mathbf 0, \mathbf I)$ prior is the one that gives rise to PCA. Most other priors would make the problem much more complicated or even analytically intractable. Having a Gaussian prior and a Gaussian conditional distribution leads to a Gaussian marginal distribution $p(\mathbf x)$, and it is easy to see that its covariance matrix will be given by $\mathbf W \mathbf W^\top + \sigma^2\mathbf I$. Non-Gaussian distributions are much harder to work with. Having a Gaussian marginal distribution $p(\mathbf x)$ is also attractive because the task of standard PCA is to model the covariance matrix (i.e. the second moment); PCA is not interested in higher moments of the data distribution. The Gaussian distribution is fully described by the first two moments: mean and covariance. We don't want to use more complicated/flexible distributions, because PCA is not dealing with these aspects of the data. 
The Gaussian prior has unit covariance matrix because the idea is to have uncorrelated latent variables that give rise to the observed covariances only via loadings $\mathbf W$.
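A short numpy sketch of the generative model (the dimensions, $\mathbf W$, $\boldsymbol\mu$, and $\sigma$ below are arbitrary illustrative choices): sample $\mathbf z \sim \mathcal N(\mathbf 0, \mathbf I)$ and $\mathbf x \mid \mathbf z \sim \mathcal N(\mathbf W\mathbf z + \boldsymbol\mu, \sigma^2\mathbf I)$, and check that the empirical covariance of $\mathbf x$ approaches the implied marginal covariance $\mathbf W\mathbf W^\top + \sigma^2\mathbf I$.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, sigma = 4, 2, 0.5
W = rng.normal(size=(D, M))   # loadings
mu = rng.normal(size=D)

# sample from the PPCA generative model
N = 200_000
Z = rng.normal(size=(N, M))                          # z ~ N(0, I)
X = Z @ W.T + mu + sigma * rng.normal(size=(N, D))   # x|z ~ N(Wz + mu, sigma^2 I)

C_model = W @ W.T + sigma**2 * np.eye(D)  # implied marginal covariance
C_hat = np.cov(X.T)                       # empirical covariance
print(np.abs(C_hat - C_model).max())      # small sampling error
```

Note that the latent covariance is the identity, so all the observed correlations in `C_hat` come through the loadings `W`, which is the point made above.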
What is the distribution of the cardinality of the intersection of independent random samples without replacement?
Here's another approach, one that doesn't involve recursion. It still uses sums and products whose lengths depend on the parameters, though. First I'll give the expression, then explain. We have \begin{align} P &\bigl( | L_{1} \cap L_{2} \cap \cdots \cap L_{m} | = k \bigr) \\ &= \frac{\binom{n}{k}}{\prod_{i = 1}^{m} \binom{n}{a_{i}}} \sum_{j = 0}^{\min(a_{1}, \ldots, a_{m}) - k} (-1)^{j} \binom{n - k}{j} \prod_{l = 1}^{m} \binom{n - j - k}{a_{l} - j - k}. \end{align} EDIT: At the end of writing all of this, I realized that we can consolidate the expression above a little by combining the binomial coefficients into hypergeometric probabilities and trinomial coefficients. For what it's worth, the revised expression is \begin{equation} \sum_{j = 0}^{\min(a_{1}, \ldots, a_{m}) - k} (-1)^{j} \binom{n}{j, k, n - j - k} \prod_{l = 1}^{m} P( \text{Hyp}(n, j + k, a_{l}) = j + k). \end{equation} Here $\text{Hyp}(n, j + k, a_{l})$ is a hypergeometric random variable where $a_{l}$ draws are taken from a population of size $n$ having $j + k$ success states. Derivation Let's get some notation in order to make the combinatorial arguments a little easier to track (hopefully). Throughout, we consider $S$ and $a_{1}, \ldots, a_{m}$ fixed. We'll use $\mathcal{C}(I)$ to denote the collection of ordered $m$-tuples $(L_{1}, \ldots, L_{m})$, where each $L_{i} \subseteq S$, satisfying $|L_{i}| = a_{i}$; and $L_{1} \cap \cdots \cap L_{m} = I$. We'll also use $\mathcal{C}'(I)$ for a collection identical except that we require $L_{1} \cap \cdots \cap L_{m} \supseteq I$ instead of equality. A key observation is that $\mathcal{C}'(I)$ is relatively easy to count. This is because the condition $L_{1} \cap \cdots \cap L_{m} \supseteq I$ is equivalent to $L_{i} \supseteq I$ for all $i$, so in a sense this removes interactions between different $i$ values. 
For each $i$, the number of $L_{i}$ satisfying the requirement is $\binom{|S| - |I|}{a_{i} - |I|}$, since we can construct such an $L_{i}$ by choosing a subset of $S \setminus I$ of size $a_{i} - |I|$ and then unioning with $I$. It follows that \begin{equation} | \mathcal{C}'(I) | = \prod_{i = 1}^{m} \binom{|S| - |I|}{a_{i} - |I|}. \end{equation} Now our original probability can be expressed via the $\mathcal{C}$ as follows: \begin{equation} P \bigl( | L_{1} \cap L_{2} \cap \cdots \cap L_{m} | = k \bigr) = \frac{ \sum_{I : |I| = k} | \mathcal{C}(I) | } { \sum_{\text{all $I \subseteq S$}} | \mathcal{C}(I) | }. \end{equation} We can make two simplifications here right away. First, the denominator is the same as \begin{equation} | \mathcal{C}'(\emptyset) | = \prod_{i = 1}^{m} \binom{|S|}{a_{i}} = \prod_{i = 1}^{m} \binom{n}{a_{i}}. \end{equation} Second, a permutation argument shows that $| \mathcal{C}(I) |$ only depends on $I$ through the cardinality $|I|$. Since there are $\binom{n}{k}$ subsets of $S$ having cardinality $k$, it follows that \begin{equation} \sum_{I : |I| = k} | \mathcal{C}(I) | = \binom{n}{k} | \mathcal{C}(I_{0}) |, \end{equation} where $I_{0}$ is an arbitrary, fixed subset of $S$ having cardinality $k$. Taking a step back, we've now reduced the problem to showing that \begin{equation} | \mathcal{C}(I_{0}) | = \sum_{j = 0}^{\min(a_{1}, \ldots, a_{m}) - k} (-1)^{j} \binom{n - k}{j} \prod_{l = 1}^{m} \binom{n - j - k}{a_{l} - j - k}. \end{equation} Let $J_{1}, \ldots, J_{n - k}$ be the distinct subsets of $S$ formed by adding exactly one element to $I_{0}$. Then \begin{equation} \mathcal{C}(I_{0}) = \mathcal{C}'(I_{0}) \setminus \biggl( \bigcup_{i = 1}^{n - k} \mathcal{C}'(J_{i}) \biggr). \end{equation} (This is just saying that if $L_{1} \cap \cdots \cap L_{m} = I_{0}$, then $L_{1} \cap \cdots \cap L_{m}$ contains $I_{0}$ but also does not contain any additional element.) 
We've now transformed the $\mathcal{C}$-counting problem to a $\mathcal{C}'$-counting problem, which we know how to handle. More specifically, we have \begin{equation} | \mathcal{C}(I_{0}) | = | \mathcal{C}'(I_{0}) | - \biggl| \bigcup_{i = 1}^{n - k} \mathcal{C}'(J_{i}) \biggr| = \prod_{l = 1}^{m} \binom{n - k}{a_{l} - k} - \biggl| \bigcup_{i = 1}^{n - k} \mathcal{C}'(J_{i}) \biggr|. \end{equation} We can apply inclusion-exclusion to handle the size of the union expression above. The crucial relationship here is that, for any nonempty $\mathcal{I} \subseteq \{ 1, \ldots, n - k \}$, \begin{equation} \bigcap_{i \in \mathcal{I}} \mathcal{C}'(J_{i}) = \mathcal{C}' \biggl( \bigcup_{i \in \mathcal{I}} J_{i} \biggr). \end{equation} This is because if $L_{1} \cap \cdots \cap L_{m}$ contains a number of the $J_{i}$, then it also contains their union. We also note that the set $\bigcup_{i \in \mathcal{I}} J_{i}$ has size $|I_{0}| + |\mathcal{I}| = k + |\mathcal{I}|$. Therefore \begin{align} \biggl| \bigcup_{i = 1}^{n - k} \mathcal{C}'(J_{i}) \biggr| &= \sum_{\emptyset \neq \mathcal{I} \subseteq \{ 1, \ldots, n - k \}} (-1)^{| \mathcal{I} | - 1} \biggl| \bigcap_{i \in \mathcal{I}} \mathcal{C}'(J_{i}) \biggr| \\ &= \sum_{j = 1}^{n - k} \sum_{\mathcal{I} : |\mathcal{I}| = j} (-1)^{j - 1} \prod_{l = 1}^{m} \binom{n - j - k}{a_{l} - j - k} \\ &= \sum_{j = 1}^{n - k} (-1)^{j - 1} \binom{n - k}{j} \prod_{l = 1}^{m} \binom{n - j - k}{a_{l} - j - k}. \end{align} (We can restrict the $j$ values here since the product of the binomial coefficients is zero unless $j \leq a_{l} - k$ for all $l$, i.e. $j \leq \min(a_{1}, \ldots, a_{m}) - k$.) Finally, by substituting the expression at the end into the equation for $| \mathcal{C}(I_{0}) |$ above and consolidating the sum, we obtain \begin{equation} | \mathcal{C}(I_{0}) | = \sum_{j = 0}^{\min(a_{1}, \ldots, a_{m}) - k} (-1)^{j} \binom{n - k}{j} \prod_{l = 1}^{m} \binom{n - j - k}{a_{l} - j - k} \end{equation} as claimed.
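The closed-form expression is easy to implement and check. A sketch in Python (function and variable names are my own), verifying that the pmf sums to one and that for $m = 2$ it reproduces the hypergeometric pmf of the overlap of two samples:

```python
from math import comb

def p_intersection(n, a, k):
    """P(|L_1 ∩ ... ∩ L_m| = k) for independent uniform samples, a[i] = |L_i|."""
    if k > min(a):
        return 0.0
    total = 0
    for j in range(min(a) - k + 1):
        prod = 1
        for al in a:                            # product over the m samples
            prod *= comb(n - j - k, al - j - k)
        total += (-1) ** j * comb(n - k, j) * prod
    denom = 1
    for ai in a:
        denom *= comb(n, ai)
    return comb(n, k) * total / denom

n, a = 8, [5, 4, 3]
pmf = [p_intersection(n, a, k) for k in range(min(a) + 1)]
print(pmf, sum(pmf))  # the probabilities sum to 1
```

For $m = 2$ the whole distribution collapses to the familiar hypergeometric $P(k) = \binom{a_1}{k}\binom{n-a_1}{a_2-k}/\binom{n}{a_2}$, which makes a convenient exact test case.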
What is the distribution of the cardinality of the intersection of independent random samples without replacement?
I'm not aware of an analytic way to solve this, but here's a recursive way to compute the result. For $m=2$ you're choosing $a_2$ elements out of $n,$ $a_1$ of which have been chosen before. The probability of choosing $k \le \min\{a_1,a_2\}$ elements that intersect with $L_1$ in your second draw is given by the hypergeometric distribution: $$ P(k \mid n, a_1, a_2) = \frac{ {a_1 \choose k} {n - a_1 \choose a_2 - k} } {n \choose a_2}. $$ We can call the result $b_2.$ We can use the same logic to find $P(b_3 = k \mid n, b_2, a_3),$ where $b_3$ is the cardinality of the intersection of three samples. Then, $$ P(b_3=k) = \sum_{l=0}^{\min(a_1,a_2)} P(b_3=k \mid n, b_2=l, a_3) P(b_2 =l \mid n, a_1, a_2). $$ Find this for each $k \in \{0, 1, 2, \dots, \min(a_1,a_2,a_3)\}$. The latter calculation is not numerically difficult, because $P(b_2 = l \mid n, a_1, a_2)$ is simply the result of the previous calculation and $P(b_3 = k \mid n, b_2=l, a_3)$ is an invocation of the hypergeometric distribution. 
In general, to find $P(b_m)$ you can apply the following recursive formulas: $$ P(b_i=k) = \sum_{l=0}^{\min(a_1, a_2, \dots, a_{i-1})} P(b_i = k \mid n, b_{i-1}=l, a_i) P(b_{i-1}=l), $$ $$ P(b_i = k \mid n, b_{i-1}=l, a_i) = \frac{{l \choose k} {n-l \choose a_i - k}} {n \choose a_i}, $$ for $i \in \{2, 3, \dots, m\},$ and $$ P(b_1) = \delta_{a_1 b_1}, $$ which is just to say that $b_1 = a_1.$ Here it is in R: hypergeom <- function(k, n, K, N) choose(K, k) * choose(N-K, n-k) / choose(N, n) #recursive function for getting P(b_i) given P(b_{i-1}) PNext <- function(n, PPrev, ai, upperBound) { l <- seq(0, upperBound, by=1) newUpperBound <- min(ai, upperBound) kVals <- seq(0, newUpperBound, by=1) PConditional <- lapply(kVals, function(k) { hypergeom(k, ai, l, n) }) PMarginal <- unlist(lapply(PConditional, function(p) sum(p * PPrev) )) PMarginal } #loop for solving P(b_m) P <- function(n, A, m) { P1 <- c(rep(0, A[1]), 1) if (m==1) { return(P1) } else { upperBound <- A[1] P <- P1 for (i in 2:m) { P <- PNext(n, P, A[i], upperBound) upperBound <- min(A[i], upperBound) } return(P) } } #Example n <- 10 m <- 5 A <- sample(4:8, m, replace=TRUE) #[1] 6 8 8 8 5 round(P(n, A, m), 4) #[1] 0.1106 0.3865 0.3716 0.1191 0.0119 0.0003 #These are the probabilities ordered from 0 to 5, which is the minimum of A
What is the distribution of the cardinality of the intersection of independent random samples withou
I'm not aware of an analytic way to solve this, but here's a recursive way to compute the result. For $m=2$ you're choosing $a_2$ elements out of $n,$ $a_1$ of which have been chosen before. The proba
What is the distribution of the cardinality of the intersection of independent random samples without replacement? I'm not aware of an analytic way to solve this, but here's a recursive way to compute the result. For $m=2$ you're choosing $a_2$ elements out of $n,$ $a_1$ of which have been chosen before. The probability of choosing $k \le \min\{a_1,a_2\}$ elements that intersect with $L_1$ in your second draw is given by the hypergeometric distribution: $$ P(k \mid n, a_1, a_2) = \frac{ {a_1 \choose k} {n - a_1 \choose a_2 - k} } {n \choose a_2}. $$ We can call the result $b_2.$ We can use the same logic to find $P(b_3 = k \mid n, b_2, a_3),$ where $b_3$ is the cardinality of the intersection of three samples. Then, $$ P(b_3=k) = \sum_{l=0}^{\min(a_1,a_2)} P(b_3=k \mid n, b_2=l, a_3) P(b_2 =l \mid n, a_1, a_2). $$ Find this for each $k \in \{0, 1, 2, \dots, \min(a_1,a_2,a_3)\}$. The latter calculation is not numerically difficult, because $P(b_2 = l \mid n, a_1, a_2)$ is simply the result of the previous calculation and $P(b_3 = k \mid n, b_2=l, a_3)$ is an invocation of the hypergeometric distribution. 
In general, to find $P(b_m)$ you can apply the following recursive formulas: $$ P(b_i=k) = \sum_{l=0}^{\min(a_1, a_2, \dots, a_{i-1})} P(b_i = k \mid n, b_{i-1}=l, a_i) P(b_{i-1}=l), $$ $$ P(b_i = k \mid n, b_{i-1}=l, a_i) = \frac{{l \choose k} {n-l \choose a_i - k}} {n \choose a_i}, $$ for $i \in \{2, 3, \dots, m\},$ and $$ P(b_1) = \delta_{a_1 b_1}, $$ which is just to say that $b_1 = a_1.$ Here it is in R: hypergeom <- function(k, n, K, N) choose(K, k) * choose(N-K, n-k) / choose(N, n) #recursive function for getting P(b_i) given P(b_{i-1}) PNext <- function(n, PPrev, ai, upperBound) { l <- seq(0, upperBound, by=1) newUpperBound <- min(ai, upperBound) kVals <- seq(0, newUpperBound, by=1) PConditional <- lapply(kVals, function(k) { hypergeom(k, ai, l, n) }) PMarginal <- unlist(lapply(PConditional, function(p) sum(p * PPrev) )) PMarginal } #loop for solving P(b_m) P <- function(n, A, m) { P1 <- c(rep(0, A[1]), 1) if (m==1) { return(P1) } else { upperBound <- A[1] P <- P1 for (i in 2:m) { P <- PNext(n, P, A[i], upperBound) upperBound <- min(A[i], upperBound) } return(P) } } #Example n <- 10 m <- 5 A <- sample(4:8, m, replace=TRUE) #[1] 6 8 8 8 5 round(P(n, A, m), 4) #[1] 0.1106 0.3865 0.3716 0.1191 0.0119 0.0003 #These are the probabilities ordered from 0 to 5, which is the minimum of A
What is the distribution of the cardinality of the intersection of independent random samples withou I'm not aware of an analytic way to solve this, but here's a recursive way to compute the result. For $m=2$ you're choosing $a_2$ elements out of $n,$ $a_1$ of which have been chosen before. The proba
26,507
Is it always better to extract more factors when they exist?
The issue you're alluding to is the 'approximate unidimensionality' topic when building psychological testing instruments, which has been discussed in the liturature quite a bit in the 80's. The inspiration existed in the past because practitioners wanted to use traditional item response theory (IRT) models for their items, and at the time these IRT models were exclusively limited to measuring unidimensional traits. So, test multidimensionality was hoped to be a nuisance that (hopefully) could be avoided or ignored. This is also what led to the creation of the parallel analysis techniques in factor analysis (Drasgow and Parsons, 1983) and the DETECT methods. These methods were --- and still are --- useful because linear factor analysis (what you are referring to) can be a decent limited-information proxy to full-information factor analysis for categorical data (which is what IRT is at its core), and in some cases (e.g., when polychoric correlations are used with a weighted estimator, such as WLSMV or DWLS) can even be asymptotically equivalent for a small selection of ordinal IRT models. The consequences of ignoring additional traits/factors, other than obviously fitting the wrong model to the data (i.e., ignoring information about potential model misfit; though it may of course be trivial), is that trait estimates on the dominant factor will become biased and therefore less efficient. These conclusions are of course dependent on how the properties of the additional traits (e.g., are they correlated with the primary dimension, do they have strong loadings, how many cross-loadings are there, etc), but the general theme is that secondary estimates for obtaining primary trait scores will be less effective. See the technical report here for a comparison between a miss-fitted unidimensional model and a bi-factor model; the technical report appears to be exactly what you are after. 
From a practical perspective, using information criteria can be helpful when selecting the most optimal model, as well as model-fit statistics in general (RMSEA, CFI, etc) because the consequences of ignoring multidimensional information will negatively affect the overall fit to the data. But of course, overall model fit is only one indication of using an inappropriate model for the data at hand; it's entirely possible that improper functional forms are used, such as non-linearity or lack of monotonicity, so the respective items/variables should always be inspected as well. See also: Drasgow, F. and Parsons, C. K. (1983). Application of Unidimensional Item Response Theory Models to Multidimensional Data. Applied Psychological Measurement, 7 (2), 189-199. Drasgow, F. & Lissak, R. I. (1983). Modified parallel analysis: A procedure for examining the latent-dimensionality of dichotomously scored item responses. Journal of Applied Psychology, 68, 363-373. Levent Kirisci, Tse-chi Hsu, and Lifa Yu (2001). Robustness of Item Parameter Estimation Programs to Assumptions of Unidimensionality and Normality. Applied Psychological Measurement, 25 (2), 146-162.
Is it always better to extract more factors when they exist?
The issue you're alluding to is the 'approximate unidimensionality' topic when building psychological testing instruments, which has been discussed in the liturature quite a bit in the 80's. The inspi
Is it always better to extract more factors when they exist? The issue you're alluding to is the 'approximate unidimensionality' topic when building psychological testing instruments, which has been discussed in the liturature quite a bit in the 80's. The inspiration existed in the past because practitioners wanted to use traditional item response theory (IRT) models for their items, and at the time these IRT models were exclusively limited to measuring unidimensional traits. So, test multidimensionality was hoped to be a nuisance that (hopefully) could be avoided or ignored. This is also what led to the creation of the parallel analysis techniques in factor analysis (Drasgow and Parsons, 1983) and the DETECT methods. These methods were --- and still are --- useful because linear factor analysis (what you are referring to) can be a decent limited-information proxy to full-information factor analysis for categorical data (which is what IRT is at its core), and in some cases (e.g., when polychoric correlations are used with a weighted estimator, such as WLSMV or DWLS) can even be asymptotically equivalent for a small selection of ordinal IRT models. The consequences of ignoring additional traits/factors, other than obviously fitting the wrong model to the data (i.e., ignoring information about potential model misfit; though it may of course be trivial), is that trait estimates on the dominant factor will become biased and therefore less efficient. These conclusions are of course dependent on how the properties of the additional traits (e.g., are they correlated with the primary dimension, do they have strong loadings, how many cross-loadings are there, etc), but the general theme is that secondary estimates for obtaining primary trait scores will be less effective. See the technical report here for a comparison between a miss-fitted unidimensional model and a bi-factor model; the technical report appears to be exactly what you are after. 
From a practical perspective, using information criteria can be helpful when selecting the most optimal model, as well as model-fit statistics in general (RMSEA, CFI, etc) because the consequences of ignoring multidimensional information will negatively affect the overall fit to the data. But of course, overall model fit is only one indication of using an inappropriate model for the data at hand; it's entirely possible that improper functional forms are used, such as non-linearity or lack of monotonicity, so the respective items/variables should always be inspected as well. See also: Drasgow, F. and Parsons, C. K. (1983). Application of Unidimensional Item Response Theory Models to Multidimensional Data. Applied Psychological Measurement, 7 (2), 189-199. Drasgow, F. & Lissak, R. I. (1983). Modified parallel analysis: A procedure for examining the latent-dimensionality of dichotomously scored item responses. Journal of Applied Psychology, 68, 363-373. Levent Kirisci, Tse-chi Hsu, and Lifa Yu (2001). Robustness of Item Parameter Estimation Programs to Assumptions of Unidimensionality and Normality. Applied Psychological Measurement, 25 (2), 146-162.
Is it always better to extract more factors when they exist? The issue you're alluding to is the 'approximate unidimensionality' topic when building psychological testing instruments, which has been discussed in the liturature quite a bit in the 80's. The inspi
26,508
Is it always better to extract more factors when they exist?
If you truly do not want to use the second factor, you should just use a one-factor model. But I am puzzled by your remark that the loadings for the first factor will change if you use a second factor. Let's deal with that statement first. If you use principal components to extract the factors and do not use factor rotation, then the loadings will not change -- subject perhaps to scaling (or complete flipping: If $x$ is a factor, then $-x$ is a legitimate way to express it as well). If you use maximum likelihood extraction and/or factor rotations, then the loadings may depend on the number of factors you extracted. Next, for the explanation of the effects of rotations. I am not good at drawing, so I will try to convince you using words. I will assume that your data are (approximately) normal, so that the factor scores are approximately normal also. If you extract one factor, you get a one-dimensional normal distribution, if you extract two factors, you get a bivariate normal distribution. The density of a bivariate distribution looks roughly speaking like a hat, but the exact shape depends on scaling as well as the correlation coefficient. So let's assume that the two components each have unit variance. In the uncorrelated case, you get a nice sombrero, with level curves that look like circles. A picture is here. Correlation "squashes" the hat, so that it looks more like a Napoleon hat. Let's assume that your original data set had three dimensions and yu want to extract two factors out of that. Let's also stick with normality. In this case the density is a four-dimensional object, but the level curves are three-dimensional and can at least be visualized. In the uncorrelated case the level curves are spherical (like a soccer ball). In the presence of correlation, the level curves will again be distorted, into a football, probably an underinflated one, so that the thickness at the seams is smaller than the thickness in the other directions. 
If you extract two factors using PCA, you completely flatten the football into an ellipse (and you project every data point onto the plane of the ellipse). The unrotated first factor corresponds to the long axis of the ellipse, the second factor is perpendicular to it (i.e., the short axis). Rotation then chooses a coordinate system within this ellipse in order to satisfy some other handy criteria. If you extract just a single factor, rotation is impossible, but you are guaranteed that the extracted PCA factor corresponds to the long axis of the ellipse.
Is it always better to extract more factors when they exist?
If you truly do not want to use the second factor, you should just use a one-factor model. But I am puzzled by your remark that the loadings for the first factor will change if you use a second factor
Is it always better to extract more factors when they exist? If you truly do not want to use the second factor, you should just use a one-factor model. But I am puzzled by your remark that the loadings for the first factor will change if you use a second factor. Let's deal with that statement first. If you use principal components to extract the factors and do not use factor rotation, then the loadings will not change -- subject perhaps to scaling (or complete flipping: If $x$ is a factor, then $-x$ is a legitimate way to express it as well). If you use maximum likelihood extraction and/or factor rotations, then the loadings may depend on the number of factors you extracted. Next, for the explanation of the effects of rotations. I am not good at drawing, so I will try to convince you using words. I will assume that your data are (approximately) normal, so that the factor scores are approximately normal also. If you extract one factor, you get a one-dimensional normal distribution, if you extract two factors, you get a bivariate normal distribution. The density of a bivariate distribution looks roughly speaking like a hat, but the exact shape depends on scaling as well as the correlation coefficient. So let's assume that the two components each have unit variance. In the uncorrelated case, you get a nice sombrero, with level curves that look like circles. A picture is here. Correlation "squashes" the hat, so that it looks more like a Napoleon hat. Let's assume that your original data set had three dimensions and yu want to extract two factors out of that. Let's also stick with normality. In this case the density is a four-dimensional object, but the level curves are three-dimensional and can at least be visualized. In the uncorrelated case the level curves are spherical (like a soccer ball). 
In the presence of correlation, the level curves will again be distorted, into a football, probably an underinflated one, so that the thickness at the seams is smaller than the thickness in the other directions. If you extract two factors using PCA, you completely flatten the football into an ellipse (and you project every data point onto the plane of the ellipse). The unrotated first factor corresponds to the long axis of the ellipse, the second factor is perpendicular to it (i.e., the short axis). Rotation then chooses a coordinate system within this ellipse in order to satisfy some other handy criteria. If you extract just a single factor, rotation is impossible, but you are guaranteed that the extracted PCA factor corresponds to the long axis of the ellipse.
Is it always better to extract more factors when they exist? If you truly do not want to use the second factor, you should just use a one-factor model. But I am puzzled by your remark that the loadings for the first factor will change if you use a second factor
26,509
Is it always better to extract more factors when they exist?
Why would you not use something like lavaan or MPlus to run two models (unidimensional model and a two dimension model aligned to your EFA results) and compare the relative and absolute fit indices of the different models (i.e., information criteria - AIC and BIC, RMSEA, SRMR, CFI/TLI)? Note that if you go down this road you would not want to use PCA for the EFA, but rather principal factors. Somebody really concerned with measurement would embed the CFA into a full structural equation model. Edit: The approach I'm asking you to consider is more about figuring out how many latent variables actually explain the set of items. If you want to get the best estimate of the larger factor, I would vote for using the factor scores from the CFA model with the better fit, whichever that is.
Is it always better to extract more factors when they exist?
Why would you not use something like lavaan or MPlus to run two models (unidimensional model and a two dimension model aligned to your EFA results) and compare the relative and absolute fit indices of
Is it always better to extract more factors when they exist? Why would you not use something like lavaan or MPlus to run two models (unidimensional model and a two dimension model aligned to your EFA results) and compare the relative and absolute fit indices of the different models (i.e., information criteria - AIC and BIC, RMSEA, SRMR, CFI/TLI)? Note that if you go down this road you would not want to use PCA for the EFA, but rather principal factors. Somebody really concerned with measurement would embed the CFA into a full structural equation model. Edit: The approach I'm asking you to consider is more about figuring out how many latent variables actually explain the set of items. If you want to get the best estimate of the larger factor, I would vote for using the factor scores from the CFA model with the better fit, whichever that is.
Is it always better to extract more factors when they exist? Why would you not use something like lavaan or MPlus to run two models (unidimensional model and a two dimension model aligned to your EFA results) and compare the relative and absolute fit indices of
26,510
Derivative of cross entropy loss in word2vec
$$\frac{\partial{CE}}{\partial{\hat{r}}} = -w_i + \frac{1}{\sum_{j}^{|V|}exp(w_j^T\hat{r})}\sum_{j}^{|V|}exp(w_j^T\hat{r})w_j$$ can be rewritten as $$\frac{\partial{CE}}{\partial{\hat{r}}} = -w_i + \sum_{j}^{|V|} \left( \frac{ \exp(w_j^\top\hat{r}) }{\sum_{j}^{|V|}exp(w_j^T\hat{r})} \cdot w_j \right)$$ note, the sums are both indexed by j but it really should be 2 different variables. This would be more appropriate $$\frac{\partial{CE}}{\partial{\hat{r}}} = -w_i + \sum_{x}^{|V|} \left( \frac{ \exp(w_x^\top\hat{r}) }{\sum_{j}^{|V|}exp(w_j^T\hat{r})} \cdot w_x \right)$$ which translates to $$\frac{\partial{CE}}{\partial{\hat{r}}} = -w_i + \sum_{x}^{|V|} \Pr(word_x\mid\hat{r}, w) \cdot w_x$$
Derivative of cross entropy loss in word2vec
$$\frac{\partial{CE}}{\partial{\hat{r}}} = -w_i + \frac{1}{\sum_{j}^{|V|}exp(w_j^T\hat{r})}\sum_{j}^{|V|}exp(w_j^T\hat{r})w_j$$ can be rewritten as $$\frac{\partial{CE}}{\partial{\hat{r}}} = -w_i +
Derivative of cross entropy loss in word2vec $$\frac{\partial{CE}}{\partial{\hat{r}}} = -w_i + \frac{1}{\sum_{j}^{|V|}exp(w_j^T\hat{r})}\sum_{j}^{|V|}exp(w_j^T\hat{r})w_j$$ can be rewritten as $$\frac{\partial{CE}}{\partial{\hat{r}}} = -w_i + \sum_{j}^{|V|} \left( \frac{ \exp(w_j^\top\hat{r}) }{\sum_{j}^{|V|}exp(w_j^T\hat{r})} \cdot w_j \right)$$ note, the sums are both indexed by j but it really should be 2 different variables. This would be more appropriate $$\frac{\partial{CE}}{\partial{\hat{r}}} = -w_i + \sum_{x}^{|V|} \left( \frac{ \exp(w_x^\top\hat{r}) }{\sum_{j}^{|V|}exp(w_j^T\hat{r})} \cdot w_x \right)$$ which translates to $$\frac{\partial{CE}}{\partial{\hat{r}}} = -w_i + \sum_{x}^{|V|} \Pr(word_x\mid\hat{r}, w) \cdot w_x$$
Derivative of cross entropy loss in word2vec $$\frac{\partial{CE}}{\partial{\hat{r}}} = -w_i + \frac{1}{\sum_{j}^{|V|}exp(w_j^T\hat{r})}\sum_{j}^{|V|}exp(w_j^T\hat{r})w_j$$ can be rewritten as $$\frac{\partial{CE}}{\partial{\hat{r}}} = -w_i +
26,511
What is the proper way of calculating the kernel density estimate from geographical coordinates?
You might consider using a kernel especially suitable for the sphere, such as a von Mises-Fisher density $$f(\mathbf{x};\kappa,\mu) \propto \exp(\kappa \mu^\prime \mathbf{x})$$ where $\mu$ and $\mathbf{x}$ are locations on the unit sphere expressed in 3D Cartesian coordinates. The analog of the bandwidth is the parameter $\kappa$. The contribution to a location $x$ from an input point at location $\mu$ on the sphere, having weight $\omega(\mu)$, therefore is $$\omega(\mu) f(\mathbf{x};\kappa,\mu).$$ For each $\mathbf{x}$, sum these contributions over all input points $\mu_i$. To illustrate, here is R code to compute the von Mises-Fisher density, generate some random locations $\mu_i$ and weights $\omega(\mu_i)$ (12 of them in the code), and display a map of the resulting kernel density for a specified value of $\kappa$ (equal to $6$ in the code). The points $\mu_i$ are shown as black dots sized to have areas proportional to their weights $\omega(\mu_i)$. The contribution of the large dot near $(100,60)$ is evident throughout the northern latitudes. The bright yellow-white patch around it would be approximately circular when shown in a suitable projection, such as an Orthographic (earth from space). # # von Mises-Fisher density. # mu is the location and x the point of evaluation, *each in lon-lat* coordinates. # Optionally, x is a two-column array. # dvonMises <- function(x, mu, kappa, inDegrees=TRUE) { lambda <- ifelse(inDegrees, pi/180, 1) SphereToCartesian <- function(x) { x <- matrix(x, ncol=2) t(apply(x, 1, function(y) c(cos(y[2])*c(cos(y[1]), sin(y[1])), sin(y[2])))) } x <- SphereToCartesian(x * lambda) mu <- matrix(SphereToCartesian(mu * lambda), ncol=1) c.kappa <- kappa / (2*pi*(exp(kappa) - exp(-kappa))) c.kappa * exp(kappa * x %*% mu) } # # Define a grid on which to compute the kernel density estimate. # x.coord <- seq(-180, 180, by=2) y.coord <- seq(-90, 90, by=1) x <- as.matrix(expand.grid(lon=x.coord, lat=y.coord)) # # Give the locations. 
# n <- 12 set.seed(17) mu <- cbind(runif(n, -180, 180), asin(runif(n, -1, 1))*180/pi) # # Weight them. # weights <- rexp(n) # # Compute the kernel density. # kappa <- 6 z <- numeric(nrow(x)) for (i in 1:nrow(mu)) { z <- z + weights[i] * dvonMises(x, mu[i, ], kappa) } z <- matrix(z, nrow=length(x.coord)) # # Plot the result. # image(x.coord, y.coord, z, xlab="Longitude", ylab="Latitude") points(mu[, 1], mu[, 2], pch=16, cex=sqrt(weights))
What is the proper way of calculating the kernel density estimate from geographical coordinates?
You might consider using a kernel especially suitable for the sphere, such as a von Mises-Fisher density $$f(\mathbf{x};\kappa,\mu) \propto \exp(\kappa \mu^\prime \mathbf{x})$$ where $\mu$ and $\math
What is the proper way of calculating the kernel density estimate from geographical coordinates? You might consider using a kernel especially suitable for the sphere, such as a von Mises-Fisher density $$f(\mathbf{x};\kappa,\mu) \propto \exp(\kappa \mu^\prime \mathbf{x})$$ where $\mu$ and $\mathbf{x}$ are locations on the unit sphere expressed in 3D Cartesian coordinates. The analog of the bandwidth is the parameter $\kappa$. The contribution to a location $x$ from an input point at location $\mu$ on the sphere, having weight $\omega(\mu)$, therefore is $$\omega(\mu) f(\mathbf{x};\kappa,\mu).$$ For each $\mathbf{x}$, sum these contributions over all input points $\mu_i$. To illustrate, here is R code to compute the von Mises-Fisher density, generate some random locations $\mu_i$ and weights $\omega(\mu_i)$ (12 of them in the code), and display a map of the resulting kernel density for a specified value of $\kappa$ (equal to $6$ in the code). The points $\mu_i$ are shown as black dots sized to have areas proportional to their weights $\omega(\mu_i)$. The contribution of the large dot near $(100,60)$ is evident throughout the northern latitudes. The bright yellow-white patch around it would be approximately circular when shown in a suitable projection, such as an Orthographic (earth from space). # # von Mises-Fisher density. # mu is the location and x the point of evaluation, *each in lon-lat* coordinates. # Optionally, x is a two-column array. # dvonMises <- function(x, mu, kappa, inDegrees=TRUE) { lambda <- ifelse(inDegrees, pi/180, 1) SphereToCartesian <- function(x) { x <- matrix(x, ncol=2) t(apply(x, 1, function(y) c(cos(y[2])*c(cos(y[1]), sin(y[1])), sin(y[2])))) } x <- SphereToCartesian(x * lambda) mu <- matrix(SphereToCartesian(mu * lambda), ncol=1) c.kappa <- kappa / (2*pi*(exp(kappa) - exp(-kappa))) c.kappa * exp(kappa * x %*% mu) } # # Define a grid on which to compute the kernel density estimate. 
# x.coord <- seq(-180, 180, by=2) y.coord <- seq(-90, 90, by=1) x <- as.matrix(expand.grid(lon=x.coord, lat=y.coord)) # # Give the locations. # n <- 12 set.seed(17) mu <- cbind(runif(n, -180, 180), asin(runif(n, -1, 1))*180/pi) # # Weight them. # weights <- rexp(n) # # Compute the kernel density. # kappa <- 6 z <- numeric(nrow(x)) for (i in 1:nrow(mu)) { z <- z + weights[i] * dvonMises(x, mu[i, ], kappa) } z <- matrix(z, nrow=length(x.coord)) # # Plot the result. # image(x.coord, y.coord, z, xlab="Longitude", ylab="Latitude") points(mu[, 1], mu[, 2], pch=16, cex=sqrt(weights))
What is the proper way of calculating the kernel density estimate from geographical coordinates? You might consider using a kernel especially suitable for the sphere, such as a von Mises-Fisher density $$f(\mathbf{x};\kappa,\mu) \propto \exp(\kappa \mu^\prime \mathbf{x})$$ where $\mu$ and $\math
26,512
Does convex ordering imply right tail dominance?
In general it is not true. Consider for example the $\mu=\frac{3}{8} \delta_{-1}(x)+\frac{1}{4}\delta_0(x)+\frac38 \delta_1(x)$ and $\nu=\frac12\delta_{-\frac12}(x)+\frac12\delta_{\frac12}(x)$. You can immediately see that $\nu\leq_{cx}\mu$. However $F_\mu^{-1}(0.6)=0<\frac12 =F_\nu^{-1}(0.6) $. It is however true that from a certain $\bar{q}$ on, $F_\mu^{-1}(q)<F_\nu^{-1}(q)$ for all $q>\bar q$.
Does convex ordering imply right tail dominance?
In general it is not true. Consider for example the $\mu=\frac{3}{8} \delta_{-1}(x)+\frac{1}{4}\delta_0(x)+\frac38 \delta_1(x)$ and $\nu=\frac12\delta_{-\frac12}(x)+\frac12\delta_{\frac12}(x)$. You c
Does convex ordering imply right tail dominance? In general it is not true. Consider for example the $\mu=\frac{3}{8} \delta_{-1}(x)+\frac{1}{4}\delta_0(x)+\frac38 \delta_1(x)$ and $\nu=\frac12\delta_{-\frac12}(x)+\frac12\delta_{\frac12}(x)$. You can immediately see that $\nu\leq_{cx}\mu$. However $F_\mu^{-1}(0.6)=0<\frac12 =F_\nu^{-1}(0.6) $. It is however true that from a certain $\bar{q}$ on, $F_\mu^{-1}(q)<F_\nu^{-1}(q)$ for all $q>\bar q$.
Does convex ordering imply right tail dominance? In general it is not true. Consider for example the $\mu=\frac{3}{8} \delta_{-1}(x)+\frac{1}{4}\delta_0(x)+\frac38 \delta_1(x)$ and $\nu=\frac12\delta_{-\frac12}(x)+\frac12\delta_{\frac12}(x)$. You c
26,513
Does convex ordering imply right tail dominance?
Ok, I think this can be solved like so (comments welcome): Denoting $\mathcal{F}_X$ and $\mathcal{F}_Y$ the distributions of $X$ and $Y$ and recalling that $$\mathcal{F}_X <_c \mathcal{F}_Y$$ implies (Oja, 1981) that $\exists z^*\in\mathbb{R}$ such that: $$F_Y(z)<F_X(z),\forall z>z^*.$$ Since shifting does not affect convex ordering, we can assume without loss of generality that $X$ has been shifted so that: $$z^*\leqslant\min(F^{-1}_X(0.5),F^{-1}_Y(0.5))$$ so that $$F_Y^{-1}(q) \leqslant F_X^{-1}(q),\quad \forall q\in[0.5,1].$$ So, it seems that yes, convex ordering of $\mathcal{F}_X<_c \mathcal{F}_Y$ implies right tail dominance of $F_Y(y)$ over $F_X(x)$ (or to be precise some version $F_{X+b}(x),\;b\in\mathbb{R}$ of $F_X(x)$)
Does convex ordering imply right tail dominance?
Ok, I think this can be solved like so (comments welcome): Denoting $\mathcal{F}_X$ and $\mathcal{F}_Y$ the distributions of $X$ and $Y$ and recalling that $$\mathcal{F}_X <_c \mathcal{F}_Y$$ implies
Does convex ordering imply right tail dominance? Ok, I think this can be solved like so (comments welcome): Denoting $\mathcal{F}_X$ and $\mathcal{F}_Y$ the distributions of $X$ and $Y$ and recalling that $$\mathcal{F}_X <_c \mathcal{F}_Y$$ implies (Oja, 1981) that $\exists z^*\in\mathbb{R}$ such that: $$F_Y(z)<F_X(z),\forall z>z^*.$$ Since shifting does not affect convex ordering, we can assume without loss of generality that $X$ has been shifted so that: $$z^*\leqslant\min(F^{-1}_X(0.5),F^{-1}_Y(0.5))$$ so that $$F_Y^{-1}(q) \leqslant F_X^{-1}(q),\quad \forall q\in[0.5,1].$$ So, it seems that yes, convex ordering of $\mathcal{F}_X<_c \mathcal{F}_Y$ implies right tail dominance of $F_Y(y)$ over $F_X(x)$ (or to be precise some version $F_{X+b}(x),\;b\in\mathbb{R}$ of $F_X(x)$)
Does convex ordering imply right tail dominance? Ok, I think this can be solved like so (comments welcome): Denoting $\mathcal{F}_X$ and $\mathcal{F}_Y$ the distributions of $X$ and $Y$ and recalling that $$\mathcal{F}_X <_c \mathcal{F}_Y$$ implies
26,514
Marginal likelihood vs. prior predictive probability
I'm assuming $\alpha$ contains the values that define your prior for $\theta$. When this is the case, we typically omit $\alpha$ from the notation and have the marginal likelihood $$p(\mathbb{X}) = \int p(\mathbb{X}|\theta) p(\theta) d\theta.$$ The prior predictive distribution is not well defined in that you haven't told me what it is that you want predicted, e.g. the prior predictive distribution is different when predicting a single data point and predicting a set of observations. In the notation, this is confusing because $p(\tilde{x}|\theta)$ is different depending on what $\tilde{x}$ is. If you want to predict data that has exactly the same structure as the data you observed, then the marginal likelihood is just the prior predictive distribution for data of this structure evaluated at the data you observed, i.e. the marginal likelihood is a number whereas the prior predictive distribution has a probability density (or mass) function.
26,515
Marginal likelihood vs. prior predictive probability
For a parametric model ${\cal M} = \{p(\cdot \mid \theta, \alpha)\}$ with two parameters $\theta$ and $\alpha$ equipped with a prior distribution $\pi(\theta, \alpha)$, the ("joint") likelihood on $(\theta, \alpha)$ after $x$ has been observed is defined by $$L(\theta, \alpha \mid x) \overset{\theta,\alpha}{\propto} p(x \mid \theta, \alpha).$$ See here about my notation $\overset{\theta,\alpha}{\propto}$. The marginal likelihood on $\alpha$ is obtained by integrating the joint likelihood over the conditional prior distribution $\pi(\theta \mid \alpha)$: $$\tilde L(\alpha \mid x) \overset{\alpha}{\propto} \int L(\theta,\alpha \mid x) \pi(\theta \mid \alpha) d\theta.$$ This is nothing but the "ordinary" likelihood for a new model $\tilde{\cal M} = \{\tilde p(\cdot \mid \alpha)\}$ with parameter $\alpha$, obtained by integrating the original sampling distribution over the conditional prior distribution $\pi(\theta \mid \alpha)$: $$\tilde p(x \mid \alpha) = \int p(x \mid \theta, \alpha)\pi(\theta \mid \alpha) d\theta$$ which is also the conditional prior predictive distribution (of $x$ given $\alpha$). Using the marginal prior distribution $\pi(\alpha)$ of $\alpha$ for this model yields exactly the same posterior distribution: $$\pi(\alpha \mid x) \overset{\alpha}{\propto} \pi(\alpha)\tilde L(\alpha \mid x).$$ To sum up, the marginal likelihood is the likelihood of the model whose sampling distribution is the conditional prior predictive distribution.
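A numerical sketch of $\tilde p(x \mid \alpha)$ for one concrete choice (Binomial sampling with $\theta \mid \alpha \sim \mathrm{Beta}(\alpha,\alpha)$ — a toy model of mine, not from the answer), comparing Monte Carlo marginalization over $\pi(\theta \mid \alpha)$ with the closed form:

```python
import random
from math import comb, lgamma, exp

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def binom_pmf(x, n, theta):
    return comb(n, x) * theta**x * (1 - theta) ** (n - x)

def marginal_closed_form(x, n, alpha):
    """tilde-p(x | alpha) for Binomial with theta | alpha ~ Beta(alpha, alpha)."""
    return comb(n, x) * exp(log_beta(x + alpha, n - x + alpha)
                            - log_beta(alpha, alpha))

def marginal_monte_carlo(x, n, alpha, draws=200_000, seed=0):
    """Integrate p(x | theta) over the conditional prior by simulation."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(draws):
        theta = rng.betavariate(alpha, alpha)
        total += binom_pmf(x, n, theta)
    return total / draws

n, x, alpha = 12, 9, 3.0
print(marginal_closed_form(x, n, alpha))
print(marginal_monte_carlo(x, n, alpha))   # should agree up to MC error
```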
26,516
Post-hoc testing in multcomp::glht for mixed-effects models (lme4) with interactions
It is a lot easier to do using the lsmeans package:

library(lsmeans)
lsmeans(mod, pairwise ~ tension | wool)
lsmeans(mod, pairwise ~ wool | tension)
26,517
Classification on variable-length time series
You can use the Dynamic Time Warping (DTW) distance measure, which can compute a distance between time series of different lengths. This question has some nice answers about how to do that. However, it would also be instructive to construct summary statistics for each series and do the clustering on those.
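As a sketch (my own code, not from the answer): a minimal pure-Python DTW using the standard dynamic program, with $|x - y|$ as the local cost — the cost function and names are arbitrary choices.

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW between two sequences of
    possibly different lengths, with |x - y| as the local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = DTW cost of aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0: same shape, different length
print(dtw_distance([1, 2, 3], [2, 3, 4]))
```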
26,518
How to select the best fit without over-fitting data? Modelling a bimodal distribution with N normal functions, etc
Here are two ways you could approach the problem of selecting your distribution: For model comparison use a measure that penalizes the model depending on the number of parameters. Information criteria do this. Use an information criterion to choose which model to retain: choose the model with the lowest information criterion (for example AIC). The rule of thumb is that a difference in AICs matters if it is greater than 2 (this is not a formal hypothesis test, see Testing the difference in AIC of two non-nested models). The AIC is $\mathrm{AIC} = 2k - 2\ln(L)$, where $k$ is the number of estimated parameters and $L$ is the maximized likelihood, $L = \max\limits_{\theta} L(\theta \mid x)$, with $L(\theta \mid x) = \Pr(x\mid\theta)$ the likelihood function and $\Pr(x\mid\theta)$ the probability of the observed data $x$ conditional on the distribution parameter $\theta$. If you want a formal hypothesis test you could proceed in at least two ways. The arguably easier one is to fit your distributions using part of your sample and then test whether the residual distributions are significantly different using a chi-squared or Kolmogorov-Smirnov test on the rest of the data. This way you're not using the same data to fit and test your model, as AndrewM mentioned in the comments. You could also do a likelihood ratio test with an adjustment to the null distribution. A version of this is described in Lo Y. et al. (2013) "Testing the number of components in normal mixture." Biometrika, but I do not have access to the article so I cannot provide you with more details as to how exactly to do this. Either way, if the test is not significant retain the distribution with the lower number of parameters; if it is significant choose the one with the higher number of parameters.
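A minimal sketch of the AIC comparison described above (the toy data and the two candidate models are my own, not from the answer): a Normal with free mean and variance ($k=2$) against a Normal with the mean pinned at 0 ($k=1$).

```python
from math import log, pi

def normal_loglik(data, mu, var):
    """Gaussian log-likelihood of a sample at the given mean/variance."""
    n = len(data)
    return -0.5 * n * log(2 * pi * var) - sum((x - mu) ** 2 for x in data) / (2 * var)

def aic(k, loglik):
    # AIC = 2k - 2 ln(L)
    return 2 * k - 2 * loglik

data = [2.1, 1.8, 2.5, 2.2, 1.9, 2.4, 2.0, 2.3]
n = len(data)

# Model 1: Normal(mu, sigma^2), both estimated by ML (k = 2)
mu1 = sum(data) / n
var1 = sum((x - mu1) ** 2 for x in data) / n
aic1 = aic(2, normal_loglik(data, mu1, var1))

# Model 2: Normal(0, sigma^2), only the variance estimated (k = 1)
var2 = sum(x ** 2 for x in data) / n
aic2 = aic(1, normal_loglik(data, 0.0, var2))

# The sample mean is clearly far from 0, so the extra parameter pays for
# itself: model 1 should have the (much) lower AIC.
print(aic1, aic2)
```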
26,519
How to calculate the inverse of sum of a Kronecker product and a diagonal matrix
Well, we know that: $$(\mathbf{A} \otimes \mathbf{B})^{-1} = \mathbf{A}^{-1} \otimes \mathbf{B}^{-1}$$ (and $(\mathbf{A} \otimes \mathbf{B})$ is invertible $\iff$ $\mathbf{A}$ and $\mathbf{B}$ are invertible as well). So that takes care of the first term. $\mathbf{C}$ is invertible, with inverse equal to the diagonal matrix whose diagonal elements are the element-wise inverses of the diagonal entries of $\mathbf{C}$. We also know that, for invertible $\mathbf{C}$ and $\mathbf{D}$: $$\left(\mathbf{D}+\mathbf{C}\right)^{-1} = \mathbf{D}^{-1} - \mathbf{D}^{-1} \left(\mathbf{C}^{-1}+\mathbf{D}^{-1} \right)^{-1}\mathbf{D}^{-1},$$ where $\mathbf{D}=\mathbf{A} \otimes \mathbf{B}$. Next, you use the fact that: $$\left(\mathbf{C}^{-1}+\mathbf{D}^{-1} \right)^{-1}=\mathbf{C}\left(\mathbf{C}+\mathbf{D}\right)^{-1}\mathbf{D}.$$ Plugging this into the previous formula yields: $$\left(\mathbf{D}+\mathbf{C}\right)^{-1} = \mathbf{D}^{-1} - \mathbf{D}^{-1} \mathbf{C}\left(\mathbf{C}+\mathbf{D}\right)^{-1},$$ and moving things around: $$\left(\mathbf{D}+\mathbf{C}\right)^{-1} = \left(\mathbf{I}+\mathbf{D}^{-1}\mathbf{C}\right)^{-1}\mathbf{D}^{-1},$$ where $\mathbf{I}$ is the identity matrix of the same size as $\mathbf{D}$. Now, given the spectral decomposition of $\mathbf{D}^{-1}\mathbf{C}$, that of $\left(\mathbf{I}+\mathbf{D}^{-1}\mathbf{C}\right)^{-1}$ is easy to compute, because the eigenvectors of $\mathbf{I}+\mathbf{D}^{-1}\mathbf{C}$ are the same as those of $\mathbf{D}^{-1}\mathbf{C}$, and its eigenvalues equal $1$ plus the eigenvalues of $\mathbf{D}^{-1}\mathbf{C}$. Because $\mathbf{C}$ is diagonal, the spectral decomposition of $\mathbf{D}^{-1}\mathbf{C}$ itself is easy to obtain from that of $\mathbf{D}^{-1}$. 
Now, you still have to compute the spectral decomposition of $\mathbf{D}$, but the fact that $\mathbf{D}$ is a Kronecker product of two smaller matrices helps considerably here.
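A quick numerical sanity check of the two identities used above; numpy is assumed, and the small random, well-conditioned $\mathbf{A}$, $\mathbf{B}$, $\mathbf{C}$ are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small random ingredients: D = A kron B, and C diagonal with positive entries.
# Adding 3*I keeps A and B comfortably invertible.
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)
B = rng.standard_normal((2, 2)) + 3 * np.eye(2)
D = np.kron(A, B)
C = np.diag(rng.uniform(0.5, 2.0, size=6))
I = np.eye(6)

# (A kron B)^{-1} = A^{-1} kron B^{-1}
assert np.allclose(np.linalg.inv(D),
                   np.kron(np.linalg.inv(A), np.linalg.inv(B)))

# (D + C)^{-1} = (I + D^{-1} C)^{-1} D^{-1}
lhs = np.linalg.inv(D + C)
rhs = np.linalg.inv(I + np.linalg.inv(D) @ C) @ np.linalg.inv(D)
assert np.allclose(lhs, rhs)

print("identities verified")
```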
26,520
Compute quantile of sum of distributions from particular quantiles
$q_Z$ could be anything. To understand this situation, let us make a preliminary simplification. By working with $Y_i = X_i - q_i$ we obtain a more uniform characterization $$\alpha = \Pr(X_i \le q_i) = \Pr(Y_i \le 0).$$ That is, each $Y_i$ has the same probability of being negative. Because $$W = \sum_i Y_i = \sum_i X_i - \sum_i q_i = Z - \sum_i q_i,$$ the defining equation for $q_Z$ is equivalent to $$\alpha = \Pr(Z \le q_Z) = \Pr(Z - \sum_i q_i \le q_Z - \sum_i q_i) = \Pr(W \le q_W)$$ with $q_Z = q_W + \sum_i q_i$.

What are the possible values of $q_W$? Consider the case where the $Y_i$ all have the same distribution with all probability on two values, one of them negative ($y_{-}$) and the other one positive ($y_{+}$). The possible values of the sum $W$ are limited to $ky_{-} + (n-k)y_{+}$ for $k=0, 1, \ldots, n$. Each of these occurs with probability $${\Pr}_W(ky_{-} + (n-k)y_{+}) = \binom{n}{k}\alpha^k(1-\alpha)^{n-k}.$$ The extremes can be found by:

1. Choosing $y_{-}$ and $y_{+}$ so that $y_{-} + (n-1)y_{+} \lt 0$; $y_{-}=-n$ and $y_{+}=1$ will accomplish this. This guarantees that $W$ will be negative except when all the $Y_i$ are positive. This chance equals $1 - (1-\alpha)^n$. It exceeds $\alpha$ when $n\gt 1$, implying the $\alpha$ quantile of $W$ must be strictly negative.

2. Choosing $y_{-}$ and $y_{+}$ so that $(n-1) y_{-} + y_{+} \gt 0$; $y_{-}=-1$ and $y_{+}=n$ will accomplish this. This guarantees that $W$ will be negative only when all the $Y_i$ are negative. This chance equals $\alpha^n$. It is less than $\alpha$ when $n\gt 1$, implying the $\alpha$ quantile of $W$ must be strictly positive.

This shows that the $\alpha$ quantile of $W$ could be either negative or positive, but is not zero. What could its size be? It has to equal some integral linear combination of $y_{-}$ and $y_{+}$. Making both these values integers assures all the possible values of $W$ are integral. 
Upon scaling $y_{\pm}$ by an arbitrary positive number $s$, we can guarantee that all integral linear combinations of $y_{-}$ and $y_{+}$ are integral multiples of $s$. Since $q_W \ne 0$, it must be at least $s$ in size. Consequently, the possible values of $q_W$ (and hence of $q_Z$) are unlimited, no matter what $n\gt 1$ may equal. The only way to derive any information about $q_Z$ would be to make specific and strong constraints on the distributions of the $X_i$, in order to prevent or limit the kind of unbalanced distributions used to derive this negative result.
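The two-point construction can be checked by exact enumeration; this sketch (my own code, with the arbitrary choice $\alpha=0.1$, $n=3$) reproduces both extremes:

```python
from math import comb

def quantile_of_W(alpha, n, y_neg, y_pos):
    """alpha-quantile of W = sum of n i.i.d. two-point variables taking
    y_neg with probability alpha and y_pos with probability 1 - alpha,
    computed by exact enumeration of the Binomial mixture."""
    support = sorted(k * y_neg + (n - k) * y_pos for k in range(n + 1))
    probs = {k * y_neg + (n - k) * y_pos:
             comb(n, k) * alpha**k * (1 - alpha) ** (n - k)
             for k in range(n + 1)}
    cum = 0.0
    for w in support:          # walk the cdf until it reaches alpha
        cum += probs[w]
        if cum >= alpha:
            return w

alpha, n = 0.1, 3
# Each Y_i has P(Y_i <= 0) = alpha in both designs, yet:
print(quantile_of_W(alpha, n, y_neg=-n, y_pos=1))   # strictly negative
print(quantile_of_W(alpha, n, y_neg=-1, y_pos=n))   # strictly positive
```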
26,521
Compute quantile of sum of distributions from particular quantiles
Since you say you have sample data, you could use the following numerical method:

a) fit pdfs to the data for each $X_i$ variable, maybe using kernel densities
b) take the DFT (discrete Fourier transform) of each kernel density
c) multiply the DFTs together
d) take the inverse DFT

That would give you an estimate of the pdf of $Z$. It's a standard technique for finding the distribution of the sum of independent random variables, covered by many authors, and not too hard to derive. It's used in the insurance industry for combining distributions of possible insurance claims. For the sake of providing a citation I googled it, and found this one: doi.org/10.1016/S0167-4730(96)00032-X (although I have only read the abstract).
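A sketch of steps (b)–(d) using a discrete pmf in place of a kernel density (numpy's FFT is assumed; the two-dice example is mine, chosen because the result is known exactly):

```python
import numpy as np

# pmf of one fair six-sided die, stored on indices 0..5 (face value - 1)
p = np.ones(6) / 6.0

# Sum of two independent dice: multiply the DFTs, then invert.
size = 2 * len(p) - 1                  # support size of the sum
fp = np.fft.rfft(p, size)              # zero-padded DFT of each pmf
pmf_sum = np.fft.irfft(fp * fp, size)  # inverse DFT of the product

# Undo the index shift: die1 + die2 runs over 2..12.
support = np.arange(2, 13)
print(dict(zip(support, np.round(pmf_sum, 4))))

# A quantile of the sum falls out of the cumulative pmf.
cdf = np.cumsum(pmf_sum)
q50 = support[np.searchsorted(cdf, 0.5)]
print(q50)   # median of two dice is 7
```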
26,522
Compute quantile of sum of distributions from particular quantiles
Convolution method: If the random variables $X_i$ are independent and identically distributed (i.i.d.), you can use the convolution property of probability distributions: the sum of independent random variables has a probability density function (PDF) equal to the convolution of the individual PDFs. Let $f$ be the PDF of each $X_i$ and $Y$ the random variable defined as the sum of $n$ i.i.d. $X_i$. Then the PDF of $Y$ is $$f_Y = f \ast f \ast \cdots \ast f \quad (n \text{ times}),$$ where $\ast$ denotes the convolution operation. The quantile function $Q_Y$ of $Y$ is then obtained by inverting the CDF of $f_Y$ (note that quantile functions themselves do not convolve). This approach assumes independence and identical distribution of the $X_i$.
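A sketch under the stated i.i.d. assumption, convolving pmfs directly and then inverting the CDF for the quantile (the three-dice example and helper names are mine):

```python
def convolve_pmf(p, q):
    """Direct convolution of two pmfs given on supports 0..len-1."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def quantile(pmf, offset, alpha):
    """Invert the cdf of a pmf whose support starts at `offset`."""
    cum = 0.0
    for k, pk in enumerate(pmf):
        cum += pk
        if cum >= alpha:
            return offset + k

# Three i.i.d. fair dice: convolve the single-die pmf with itself twice.
die = [1 / 6.0] * 6            # faces {1,...,6}, stored from index 0
pmf = convolve_pmf(convolve_pmf(die, die), die)

print(sum(pmf))                             # ~1: still a distribution
print(quantile(pmf, offset=3, alpha=0.5))   # median of 3d6: 10 or 11
```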
26,523
Low sample size: LR vs F - test
The likelihood ratio test you're using relies on a chi-squared distribution to approximate the null distribution of the likelihood ratio statistic. This approximation works best with large sample sizes, so its inaccuracy with a small sample size makes some sense. I see a few options for getting better Type I error in your situation: There are corrected versions of the likelihood ratio test, such as Bartlett's correction. I don't know much about these (beyond the fact that they exist), but I've heard that Ben Bolker knows more. You could estimate the null distribution of the likelihood ratio by bootstrapping. If the observed likelihood ratio statistic falls in the upper 5% tail of the bootstrap distribution, then it's statistically significant at the 5% level. Finally, the Poisson distribution has one fewer free parameter than the negative binomial, and might be worth trying when the sample size is very small.
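A sketch of the simulated-null idea (the second option). For a self-contained example I use a simple Exponential model rather than the negative binomial, since Python's standard library has no negative-binomial sampler; all names and the choice of model are my own:

```python
import random
from math import log

def lr_stat(sample):
    """2 * (loglik at the MLE - loglik at the null value lambda = 1)
    for an Exponential(lambda) model; loglik(lam) = n log lam - lam * sum(x)."""
    n = len(sample)
    s = sum(sample)
    lam_hat = n / s                      # MLE of the rate
    return 2 * ((n * log(lam_hat) - lam_hat * s) - (-s))

def simulated_critical_value(n, level=0.95, reps=20_000, seed=1):
    """Simulate the null distribution of the LR statistic for sample
    size n and return its `level` quantile."""
    rng = random.Random(seed)
    stats = sorted(lr_stat([rng.expovariate(1.0) for _ in range(n)])
                   for _ in range(reps))
    return stats[int(level * reps)]

# The chi-square(1) 95% critical value is about 3.841. With a tiny n the
# simulated critical value can differ noticeably; with a large n it
# should be close to 3.841.
cv_small = simulated_critical_value(n=5)
cv_large = simulated_critical_value(n=100)
print(cv_small, cv_large)
```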
26,524
How to understand the log marginal likelihood of a Gaussian Process?
The marginal likelihood is generally used as a measure of how well the model fits. You can find the marginal likelihood of a process as the marginalization over the set of parameters $\theta$ that govern the process, $p(\mathbf{y}) = \int p(\mathbf{y} \mid \theta)\, p(\theta)\, d\theta$. This integral is generally not available and cannot be computed in closed form. However, an approximation can be found as the sum of the complete likelihood and a penalization term, which I suppose is the decomposition that you mentioned in point 2. The likelihood is generally computed on the logarithmic scale for numerical-stability reasons: consider a computer that can store only numbers between 99,000 and 0.001 (only three decimals) plus the sign. If you compute a density that at some point has value 0.0023456789, the computer will store it as 0.002, losing part of the real value; if you compute it on the log scale, $\log(0.0023456789) = -6.05518$ will be stored as $-6.055$, losing less than on the original scale. If you multiply many small values, the situation gets worse: consider $0.0023456789^2 \approx 0.0000055$, which will be stored as 0, while $\log(0.0023456789^2) = -12.11036$.
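The same point with real double-precision floats (my own toy numbers): multiplying many small likelihood contributions underflows, while the sum of their logs stays perfectly representable.

```python
from math import log

# Multiplying many small likelihood contributions underflows quickly...
p = 2.3e-3
direct = 1.0
for _ in range(200):
    direct *= p
print(direct)          # underflows all the way to 0.0

# ...while summing their logs stays finite and accurate.
log_lik = 200 * log(p)
print(log_lik)         # about -1215
```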
26,525
Probability distribution of functions of random variables?
If $g$ is measurable, then $$ P(g(X,Z)\in A\mid X=x)=P(g(x,Z)\in A\mid X=x),\quad A\in\mathcal{B}(\mathbb{R}) $$ holds for $P_X$-a.a. $x$. In particular, if $Z$ is independent of $X$, then $$ P(g(X,Z)\in A\mid X=x)=P(g(x,Z)\in A),\quad A\in\mathcal{B}(\mathbb{R}) $$ holds for $P_X$-a.a. $x$. This relies on the following general result: If $U,T$ and $S$ are random variables and $P_S(\cdot \mid T=t)$ denotes a regular conditional probability of $S$ given $T=t$, i.e. $P_S(A \mid T=t)=P(S\in A\mid T=t)$, then $$ {\rm E}[U\mid T=t]=\int_\mathbb{R} {\rm E}[U\mid T=t,S=s]\,P_S(\mathrm ds\mid T=t).\tag{*} $$ Proof: The definition of a regular conditional probability ensures that $$ {\rm E}[\psi(S,T)]=\int_\mathbb{R}\int_\mathbb{R} \psi(s,t)\,P_S(\mathrm ds\mid T=t)P_T(\mathrm dt) $$ for measurable and integrable $\psi$. Now let $\psi(s,t)=\mathbf{1}_B(t){\rm E}[U\mid S=s,T=t]$ for some Borel set $B$. Then $$ \begin{align} \int_{T^{-1}(B)} U\,\mathrm dP&={\rm E}[\mathbf{1}_B(T)U]={\rm E}[\mathbf{1}_B(T){\rm E}[U\mid S,T]]={\rm E}[\psi(S,T)]\\ &=\int_{\mathbb{R}}\int_{\mathbb{R}}\psi(s,t)\, P_S(\mathrm ds\mid T=t)P_T(\mathrm dt)\\ &=\int_B\varphi(t)P_T(\mathrm dt) \end{align} $$ with $$ \varphi(t)=\int_\mathbb{R}{\rm E}[U\mid T=t,S=s]\,P_S(\mathrm ds\mid T=t). $$ Since $B$ was arbitrary we conclude that $\varphi(t)={\rm E}[U\mid T=t]$. Now, let $A\in\mathcal{B}(\mathbb{R})$ and use $(*)$ with $U=\psi(X,Z)$, where $\psi(x,z)=\mathbf{1}_{g^{-1}(A)}(x,z)$ and $S=Z$, $T=X$. Then we note that $$ {\rm E}[U\mid X=x,Z=z]={\rm E}[\psi(X,Z)\mid X=x,Z=z]=\psi(x,z) $$ by definition of conditional expectation and hence by $(*)$ we have $$ \begin{align} P(g(X,Z)\in A\mid X=x)&={\rm E}[U\mid X=x]=\int_\mathbb{R} \psi(x,z)\,P_Z(\mathrm dz\mid X=x)\\ &=P(g(x,Z)\in A\mid X=x). \end{align} $$
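The independence case can be illustrated by simulation (the particular $g$, the discrete supports, and the tolerance are arbitrary choices of mine): with $Z$ independent of $X$, the conditional probability $P(g(X,Z)\in A \mid X=x)$ matches the unconditional $P(g(x,Z)\in A)$ computed from $Z$'s marginal alone.

```python
import random

rng = random.Random(42)
g = lambda x, z: x + z * z        # any measurable g will do

N = 200_000
xs = [rng.choice([0, 1]) for _ in range(N)]      # X uniform on {0, 1}
zs = [rng.choice([0, 1, 2]) for _ in range(N)]   # Z independent of X

# Left side: empirical P(g(X, Z) <= 1 | X = 1)
cond = [g(x, z) <= 1 for x, z in zip(xs, zs) if x == 1]
lhs = sum(cond) / len(cond)

# Right side: P(g(1, Z) <= 1) from Z's marginal alone
rhs = sum(g(1, z) <= 1 for z in [0, 1, 2]) / 3.0

print(lhs, rhs)   # both close to 1/3
```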
26,526
Why does repeated measures ANOVA assume sphericity?
Intuition behind sphericity assumption

One of the assumptions of common (non-repeated-measures) ANOVA is equal variance in all groups. (We can understand it because equal variance, also known as homoscedasticity, is needed for the OLS estimator in linear regression to be BLUE and for the corresponding t-tests to be valid, see Gauss–Markov theorem. And ANOVA can be implemented as linear regression.) So let's try to reduce the RM-ANOVA case to the non-RM case.

For simplicity, I will be dealing with one-factor RM-ANOVA (without any between-subject effects) that has $n$ subjects recorded in $k$ RM conditions. Each subject can have their own subject-specific offset, or intercept. If we subtract values in one group from values in all other groups, we cancel these intercepts and arrive at the situation where we can use non-RM-ANOVA to test whether these $k-1$ group differences are all zero. For this test to be valid, we need an assumption of equal variances of these $k-1$ differences. Now we can subtract group #2 from all other groups, again arriving at $k-1$ differences that should also have equal variances. For each group out of $k$, the variances of the corresponding $k-1$ differences should be equal. It quickly follows that the variances of all $k(k-1)/2$ possible pairwise differences should be equal. Which is precisely the sphericity assumption.

Why shouldn't group variances be equal themselves?

When we think of RM-ANOVA, we usually think of a simple additive mixed-model-style model of the form $$y_{ij}=\mu+\alpha_i + \beta_j + \epsilon_{ij},$$ where $\alpha_i$ are subject effects, $\beta_j$ are condition effects, and $\epsilon\sim\mathcal N(0,\sigma^2)$. For this model, group differences will follow $\mathcal N(\beta_{j_1} - \beta_{j_2}, 2\sigma^2)$, i.e. they will all have the same variance $2\sigma^2$, so sphericity holds.

But each group will follow a mixture of $n$ Gaussians with means at $\alpha_i$ and variances $\sigma^2$, which is some complicated distribution with variance $V(\vec \alpha, \sigma^2)$ that is constant across groups. So in this model, indeed, group variances are the same too. Group covariances are also the same, meaning that this model implies compound symmetry. This is a more stringent condition than sphericity. As my intuitive argument above shows, RM-ANOVA can work fine in the more general situation, when the additive model written above does not hold.

Precise mathematical statement

I am going to add here something from Huynh & Feldt (1970), Conditions Under Which Mean Square Ratios in Repeated Measurements Designs Have Exact $F$-Distributions.

What happens when sphericity breaks?

When sphericity does not hold, we can probably expect RM-ANOVA to (i) have inflated size (more type I errors) and (ii) have decreased power (more type II errors). One can explore this by simulations, but I am not going to do it here.
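The intuitive argument above can be checked numerically. Below is a minimal simulation sketch of the additive model $y_{ij}=\mu+\alpha_i+\beta_j+\epsilon_{ij}$ (the spread of the subject offsets, the condition effects, and $\sigma$ are made-up toy values), confirming that every pairwise group difference has variance $2\sigma^2$ even though the groups themselves have much larger variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, sigma = 100_000, 4, 1.0                  # subjects, RM conditions, noise sd

alpha = rng.normal(0.0, 3.0, size=(n, 1))      # subject-specific offsets (large spread)
beta = np.array([0.0, 1.0, -0.5, 2.0])         # made-up condition effects
y = alpha + beta + rng.normal(0.0, sigma, size=(n, k))

# Every pairwise group difference cancels alpha_i, so its variance is 2*sigma^2,
# even though each group's own variance is about 3^2 + sigma^2 = 10.
diffs = [np.var(y[:, a] - y[:, b]) for a in range(k) for b in range(a + 1, k)]
```

All six entries of `diffs` come out close to $2\sigma^2=2$, while the group variances are near 10.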
26,527
Why does repeated measures ANOVA assume sphericity?
It turns out that the effect of violating sphericity is a loss of power (i.e. an increased probability of a Type II error) and a test statistic (the F-ratio) that simply cannot be compared to tabulated values of the F-distribution. The F-test becomes too liberal (i.e. the proportion of rejections of the null hypothesis is larger than the alpha level when the null hypothesis is true).

Precise investigation of this subject is very involved, but fortunately Box wrote a paper about that: https://projecteuclid.org/download/pdf_1/euclid.aoms/1177728786

In short, the situation is as follows. First, let's say we have a one-factor repeated-measurements design with S subjects and A experimental treatments. In this case the effect of the independent variable is tested by computing an F statistic, which is computed as the ratio of the mean square of the effect to the mean square of the interaction between the subject factor and the independent variable. When sphericity holds, this statistic has an F-distribution with $\upsilon_{1}=A-1$ and $\upsilon_{2}=(A-1)(S-1)$ degrees of freedom. In the above article Box revealed that when sphericity fails, the correct degrees of freedom of the F ratio depend on a sphericity index $\epsilon$ like so: $$ \upsilon_{1} = \epsilon(A-1) $$ $$ \upsilon_{2} = \epsilon(A-1)(S-1) $$ Also, Box introduced the sphericity index, which applies to the population covariance matrix. If we call $\xi_{a,a'}$ the entries of this $A \times A$ table, then the index is $$ \epsilon = \frac{\left ( \sum_{a}^{ }\xi_{a,a} \right )^{2}}{\left ( A-1 \right )\sum_{a,a'}^{ }\xi_{a,a'}^{2}} $$ The Box index of sphericity is best understood in relation to the eigenvalues of a covariance matrix. Recall that covariance matrices belong to the class of positive semi-definite matrices and therefore always have positive or null eigenvalues. Thus, the sphericity condition is equivalent to having all eigenvalues equal to a constant.

So, when sphericity is violated we should apply some correction to our F statistic, and the most notable examples of these corrections are Greenhouse–Geisser and Huynh–Feldt. Without any correction your results will be biased and therefore unreliable. Hope this helps!
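Here is a small sketch of computing Box's $\epsilon$ from a covariance matrix, using the eigenvalue view mentioned above: the index is computed from the eigenvalues of the double-centered covariance matrix, equals 1 under sphericity, and has lower bound $1/(A-1)$. The example matrices are made up for illustration.

```python
import numpy as np

def box_epsilon(S):
    """Box's sphericity index for an A x A covariance matrix S, computed from
    the eigenvalues of the double-centered matrix H S H."""
    A = S.shape[0]
    H = np.eye(A) - np.ones((A, A)) / A   # centering matrix
    lam = np.linalg.eigvalsh(H @ S @ H)
    return lam.sum() ** 2 / ((A - 1) * np.sum(lam ** 2))

# Compound symmetry (hence spherical): epsilon = 1
cs = 0.5 * np.ones((4, 4)) + np.eye(4)

# A strongly non-spherical covariance gives epsilon well below 1
ns = np.diag([1.0, 1.0, 1.0, 20.0])
```

`box_epsilon(cs)` returns 1 (up to rounding), while `box_epsilon(ns)` is strictly between $1/(A-1)=1/3$ and 1, which is exactly the range in which the Greenhouse–Geisser-style degree-of-freedom corrections operate.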
26,528
Why does repeated measures ANOVA assume sphericity?
I will try to answer this question in a simple setting of repeated measures ANOVA. The concept is similar to the answer by @amoeba, with hopefully a more basic illustration. Assume that a group of subjects are randomized evenly into different groups and each subject is measured an equal number of times. This is a split-plot design with subjects as the whole plots and measurements within each subject as observations on subplots. Denote $y_{ijk}$ as the measurement at the k-th timepoint of the j-th subject from the i-th group, $i=1, ..., I; j = 1, ..., J; k = 1, ..., K.$ The sample mean of the i-th group is $$\bar{y}_{i..} = \frac{1}{JK}\sum_{j=1}^{J}\sum_{k=1}^{K}{y_{ijk}}$$ and that of the ij-th subject is $$\bar{y}_{ij.} = \frac{1}{K}\sum_{k=1}^{K}{y_{ijk}}$$ By assuming independence among subjects, the variance of the difference between two group means is $$Var(\bar{y}_{i..} - \bar{y}_{i'..}) = \frac{1}{J^2}\sum_{j=1}^JVar(\bar{y}_{ij.}) + \frac{1}{J^2}\sum_{j'=1}^JVar(\bar{y}_{i'j'.})$$ It is reasonable to expect that repeated measurements within a subject are correlated. So $Var(\bar{y}_{ij.})$ is not as simple as $\sigma^{2}/K$, with $\sigma^{2}$ being the variance of each observation. Regardless, if $Var(\bar{y}_{ij.})$ is assumed constant for all subjects, one can validly execute a "straight-forward" 2-sample t-test to compare 2 group means. Thus, one motivation to assume constant variances is performing a valid and simple t-test. Now, to the sphericity question that was raised. There may be interest in comparing sample means between any two timepoints with $\bar{y}_{..k} - \bar{y}_{..k'}$, where $$\bar{y}_{..k} = \frac{1}{IJ}\sum_{i=1}^{I}\sum_{j=1}^{J}{y_{ijk}}.$$ This comparison requires finding the variance of the pairwise difference between $y_{ijk}$ and $y_{ijk'}$ across all subjects.

Specifically, under the usual assumption of independence among subjects, $$Var(\bar{y}_{..k} - \bar{y}_{..k'}) = \frac{1}{(IJ)^2}\sum_{i=1}^I\sum_{j=1}^JVar(y_{ijk} - y_{ijk'})$$ Therefore, assuming a constant variance for all pairwise differences makes it valid to perform a t-test once the common variance is estimated. This assumption, together with the constant variance of each observation, implies that the covariance between any pair of measurements is constant across all pairs - Sergio has a great post on this topic. The assumptions therefore render a variance-covariance structure for the repeated measurements of each subject as a matrix with one constant on the diagonal and another constant off the diagonal. When the off-diagonal entries are all zero, it reduces to the all-independent model (which could be inappropriate for many repeated-measurement studies). When the off-diagonal entries are the same as the diagonal one, repeated measurements are perfectly correlated within a subject, meaning any single measurement is as good as all measurements for each subject. Final note - when K = 2 in our simple split-plot design, the sphericity condition is automatically fulfilled.
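The constant-diagonal, constant-off-diagonal structure (compound symmetry) makes the equal-variance-of-differences claim easy to verify directly, since $Var(y_{ijk}-y_{ijk'})=\Sigma_{kk}+\Sigma_{k'k'}-2\Sigma_{kk'}$. A tiny sketch with made-up values for the two constants:

```python
import numpy as np

# Compound-symmetric covariance for K repeated measures: constant v on the
# diagonal, constant c off it (made-up values).
K, v, c = 5, 2.0, 0.8
S = c * np.ones((K, K)) + (v - c) * np.eye(K)

# Var(y_k - y_k') = S_kk + S_k'k' - 2 S_kk' = 2*(v - c) for every pair (k, k'),
# so the equal-variance-of-differences (sphericity) condition holds.
pair_vars = [S[a, a] + S[b, b] - 2 * S[a, b]
             for a in range(K) for b in range(a + 1, K)]
```

Every entry of `pair_vars` equals $2(v-c)=2.4$, regardless of which pair of timepoints is compared.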
26,529
Preconditioning gradient descent
Your question mentions that a diagonal $P$ gives you a parameter-dependent learning rate, which is like normalizing your input, except it assumes that your input has diagonal covariance. In general, using a preconditioning matrix $P$ is equivalent to normalizing $x$. Let the covariance of $x$ be $\Sigma = L L^T$; then $\tilde{x} = L^{-1}x$ is the normalized version of $x$. $$ \Delta \tilde{x} = \nabla_\tilde{x} F(x) = L^T \nabla_x F(x) $$ so $$ \Delta x = L \Delta \tilde{x} = LL^{T} \nabla_{x} F(x) = \Sigma \nabla_{x} F(x) $$ Doing this makes your objective (more) isotropic in the parameter space. It's the same as a parameter-dependent learning rate, except that your axes don't necessarily line up with the coordinates. Here's an image where you can see a situation where you would need one learning rate on the line $y = x$ and another on the line $y=-x$, and how the transformation $L = ( \sigma_1 + \sigma_3 ) \operatorname{diag}(1, \sqrt{10})$ solves that problem.

Another way you could look at this is that Newton's method would give you an optimization step: $$ x_{n+1} = x_n - \gamma_n [Hf|_{x_n}]^{-1} \nabla f(x_n) $$ Approximating the Hessian as constant near the minimum, with $P \approx [Hf|_{x^\star}]^{-1}$, brings you closer to the fast convergence provided by Newton's method, without having to calculate the Hessian or make the more computationally expensive approximations of the Hessian that you would see in quasi-Newton methods. Note that for a normal distribution the Hessian of the log-loss is $H = \Sigma^{-1}$, and these two perspectives are equivalent.
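A quick toy illustration of the Newton's-method view (all numbers made up): on the quadratic $f(x)=\frac12 x^\top A x$ the Hessian is exactly $A$, so preconditioning with $P=A^{-1}$ lands on the minimum in a single step, while plain gradient descent with the best stable scalar learning rate is still crawling along the poorly scaled direction.

```python
import numpy as np

# Toy anisotropic quadratic f(x) = 0.5 * x^T A x with Hessian A (made-up numbers).
A = np.array([[10.0, 0.0],
              [0.0, 1.0]])
grad = lambda x: A @ x

x_plain = np.array([1.0, 1.0])   # plain gradient descent iterate
x_pre = np.array([1.0, 1.0])     # preconditioned iterate
P = np.linalg.inv(A)             # preconditioner ~ inverse Hessian at the minimum
lr = 0.18                        # just below 2 / lambda_max, the stability limit

for _ in range(10):
    x_plain = x_plain - lr * grad(x_plain)
    x_pre = x_pre - P @ grad(x_pre)   # a full Newton step on a quadratic

# x_pre is (numerically) at the optimum after the first step, while x_plain
# is still far from it along the poorly scaled direction.
```

On a non-quadratic objective $P$ is only an approximation of the inverse Hessian at the minimum, so convergence is fast near $x^\star$ rather than exact in one step.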
26,530
The meaning of conditional test error vs. expected test error in cross-validation
I think you may be misunderstanding conditional test error. This may be because Hastie, Friedman, and Tibshirani (HFT) are not consistent in their terminology, sometimes calling this same notion "test error", "generalization error", "prediction error on an independent test set", "true conditional error", or "actual test error". Regardless of name, it's the average error that the model you fitted on a particular training set $\tau$ would incur when applied to examples drawn from the distribution of (X,Y) pairs. If you lose money each time the fitted model makes an error (or an amount proportional to the error if you're talking about regression), it's the average amount of money you lose each time you use the classifier. Arguably, it's the most natural thing to care about for a model you've fitted to a particular training set.

Once that sinks in, the real question is why one should care about expected test error! (HFT also call this "expected prediction error".) After all, it's an average over all sorts of training sets that you're typically never going to get to use. (It appears, by the way, that HFT intend an average over training sets of a particular size in defining expected test error, but they never say this explicitly.) The reason is that expected test error is a more fundamental characteristic of a learning algorithm, since it averages over the vagaries of whether you got lucky or not with your particular training set.

As you mention, HFT show that CV estimates expected test error better than it estimates conditional test error. This is fortunate if you're comparing machine learning algorithms, but unfortunate if you want to know how well the particular model you fit to a particular training set is going to work.
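The distinction can be made concrete with a small simulation sketch (a toy linear model with an assumed true slope of 2 and unit noise, all values made up): each training set yields one fitted model and hence one conditional test error $Err_\tau$; averaging those over many training sets approximates the expected test error.

```python
import numpy as np

rng = np.random.default_rng(2)

def conditional_test_error(n_train=20, n_test=20_000):
    """Fit OLS (slope through the origin) on one training set tau and return
    Err_tau: the average squared error of THIS fitted model on fresh data."""
    x = rng.normal(size=n_train)
    y = 2.0 * x + rng.normal(size=n_train)      # assumed true slope 2, unit noise
    b = (x @ y) / (x @ x)                       # OLS slope estimate
    xt = rng.normal(size=n_test)
    yt = 2.0 * xt + rng.normal(size=n_test)
    return np.mean((yt - b * xt) ** 2)

# One conditional test error per training set; their average over many training
# sets approximates the expected test error.
cond_errors = np.array([conditional_test_error() for _ in range(300)])
expected_err = cond_errors.mean()
```

The individual `cond_errors` scatter visibly around `expected_err` (which sits a little above the noise variance of 1), which is exactly the luck-of-the-training-set variation that the expected test error averages away.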
26,531
The meaning of conditional test error vs. expected test error in cross-validation
I'm thinking about the same passage and am also wondering when I would ever be interested in the conditional test error. What's more, as far as I can understand they should be the same asymptotically: for very large training and test sets the precise training/test set split should no longer result in different conditional test error estimates. As you can see in the Hastie et al. book, their examples on conditional vs. expected differences are always based on relatively small numbers of observations, which, if I understand this correctly, is the reason why conditional and expected test errors look different in the graphs.

The book mentions that the expected test error averages over the randomness in the training set, while the (conditional) test error does not. Now when would I want to take into account the uncertainty associated with which particular training/test-set partition I make? My answer would be that I'm usually never interested in accommodating this kind of uncertainty, as this is not what I'm interested in when I'm doing model assessment: In assessing the predictive quality of a model I want to know how it would fare in, let's say, forecasting the weather tomorrow. The weather tomorrow is related to my overall data pretty much as my test data is related to my training data - so I calculate one conditional test error to assess my model. However, the weather tomorrow is related to my overall data not as one specific test set is related to the corresponding specific training set, but as the average test set is related to the average training set. So I obtain the next training/test-set partition and get another conditional test error. I do this many times (as e.g. in K-fold cross-validation) - the variation in the individual conditional test errors averages out - and I'm left with the expected test error; which, again, is all I can think of wanting to obtain.

Put differently, in the test error/expected test error graphs in Hastie et al., we get an idea of the efficiency of the model estimator: if the conditional test errors are widely dispersed around the expected test error, this is an indication of the estimator being inefficient, while less variation in the conditional test errors would indicate a more efficient estimator, given the amount of observations.

Bottom line: I might be mistaken here, and I'd be happy to be corrected on this, but as I see it at the moment, the concept of the conditional test error is a doubtful attempt at assessing external model validity through allowing oneself only one training/test-partitioning shot. For large samples this single shot should be equivalent to conditional test errors averaged over many training/test-partitioning shots, i.e. the expected test error. For small samples, where a difference occurs, the actual measure of interest appears to me to be the expected, and not the conditional, test error.
The meaning of conditional test error vs. expected test error in cross-validation
I'm thinking about the same passage and am also wondering when I would ever be interested in the conditional test error. What's more, as far as I can understand they should be the same asymptotically:
The meaning of conditional test error vs. expected test error in cross-validation I'm thinking about the same passage and am also wondering when I would ever be interested in the conditional test error. What's more, as far as I can understand, they should be the same asymptotically: for very large training and test sets the precise training/test-set split should no longer result in different conditional test error estimates. As you can see in the Hastie et al. book, their examples on conditional vs. expected differences are always based on a relatively small number of observations, which, if I understand this correctly, is the reason why conditional and expected test errors look different in the graphs. The book mentions that the expected test error averages over the randomness in the training set, while the (conditional) test error does not. Now when would I want to take into account the uncertainty associated with which particular training/test-set partition I make? My answer would be that I'm usually never interested in accommodating this kind of uncertainty, as this is not what I'm interested in when doing model assessment: in assessing the predictive quality of a model I want to know how it would fare in, say, forecasting the weather tomorrow. The weather tomorrow is related to my overall data pretty much as my test data is related to my training data, so I calculate one conditional test error to assess my model. However, the weather tomorrow is related to my overall data not as one specific test set is related to the corresponding specific training set, but as the average test set is related to the average training set. So I obtain the next training/test-set partition and get another conditional test error. I do this many times (as e.g. in K-fold cross-validation); the variation in the individual conditional test errors averages out, and I'm left with the expected test error, which, again, is all I can think of wanting to obtain.
Put differently, in the test error/expected test error graphs in Hastie et al., we get an idea of the efficiency of the model estimator: if the conditional test errors are widely dispersed around the expected test error, this is an indication of the estimator being inefficient, while less variation in the conditional test errors would indicate a more efficient estimator, given the number of observations. Bottom line: I might be mistaken here, and I'd be happy to be corrected on this, but as I see it at the moment the concept of the conditional test error is a doubtful attempt at assessing external model validity through allowing oneself only one training/test-partitioning shot. For large samples this single shot should be equivalent to conditional test errors averaged over many training/test-partitioning shots, i.e. the expected test error. For small samples, where a difference occurs, the actual measure of interest appears to me to be the expected, and not the conditional, test error.
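To make the point concrete, here is a small simulation (my own illustration, not from the book): each training/test split yields one conditional test error, and averaging over many splits recovers the expected test error. The sample-mean "model" and all numbers are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_test_error(n_train, n_test):
    """Fit a sample-mean 'model' on one training set and evaluate its squared
    error on one particular test set: this is the conditional test error."""
    train = rng.normal(0.0, 1.0, n_train)
    test = rng.normal(0.0, 1.0, n_test)
    prediction = train.mean()          # model fitted on this training set
    return np.mean((test - prediction) ** 2)

# Many training/test splits: the conditional errors scatter around the
# expected test error (here E[(X - mean)^2] = 1 + 1/n_train = 1.02).
errors = [conditional_test_error(50, 50) for _ in range(2000)]
expected = np.mean(errors)             # Monte Carlo estimate of the expected test error
spread = np.std(errors)                # dispersion of the conditional errors
print(expected, spread)
```

With larger training and test sets, `spread` shrinks and the single-shot conditional error approaches the expected error, matching the asymptotic argument above.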
What is a good analogy to illustrate the strengths of Hierarchical Bayesian Models?
I would like to illustrate with an example of modelling cancer rates (as in Johnson and Albert 1999). It will touch on the first and third elements of your interest. So the problem is predicting cancer rates in various cities. Say we have data on the number of people in various cities, $N_i$, and the number of people who died of cancer, $x_i$, and we want to estimate the cancer rates $\theta_i$. There are various ways to model them, and as we will see, there are problems with each. We will see how hierarchical Bayesian modelling can overcome some of these problems. 1. One way is to do the estimation separately for each city, but we will suffer from a sparse data problem: for low $N_i$ the rates will be poorly estimated. 2. Another approach to manage the sparse data problem would be to use the same $\theta_i$ for all cities, tying the parameters together, but this is a very strong assumption. 3. So what could be done is to assume that all the $\theta_i$'s are similar in some way, but with city-specific variations. One could model this by drawing all the $\theta_i$'s from a common distribution: say $x_i \sim Bin(N_i,\theta_i)$ and $\theta_i \sim Beta(a,b)$. The full joint distribution is then $p(D,\theta,\eta|N)= p(\eta)\prod_{i=1}^N Bin(x_i|N_i,\theta_i)Beta(\theta_i|\eta)$, where $\eta = (a,b)$. We need to infer $\eta$ from the data. If it were clamped to a constant, information would not flow between the $\theta_i$'s and they would be conditionally independent. But by treating $\eta$ as unknown, we allow cities with less data to borrow statistical strength from cities with more data. The main idea is to be more Bayesian, setting priors on priors to model uncertainty in the hyperparameters. This allows influence to flow between the $\theta_i$'s in this example.
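A minimal numerical sketch of the three approaches (my own illustration; the city counts and the $Beta(a,b)$ hyperparameters are made up, and a full hierarchical model would infer $(a,b)$ from the data rather than fix them):

```python
import numpy as np

# Hypothetical city data: deaths x_i out of population N_i (illustrative numbers).
N = np.array([100, 50, 1000, 5000, 20])
x = np.array([1, 0, 12, 60, 0])

raw = x / N                     # approach 1: separate estimates, noisy for small N_i
pooled = x.sum() / N.sum()      # approach 2: one theta for all cities (strong assumption)

# Approach 3 (sketch): tie the theta_i together through a shared Beta(a, b) prior.
# A full hierarchical model would infer (a, b); here they are fixed by assumption
# just to show the shrinkage effect of the common distribution.
a, b = 2.0, 150.0               # assumed hyperparameters, not inferred
posterior_mean = (x + a) / (N + a + b)

# Small-N cities are pulled toward the prior mean a/(a+b); large-N cities barely move.
print(raw)
print(posterior_mean)
```

The "borrowing of statistical strength" shows up as shrinkage: the city with $N_i = 50$ and zero observed deaths gets a nonzero estimated rate, while the city with $N_i = 5000$ keeps essentially its raw rate.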
What is a good analogy to illustrate the strengths of Hierarchical Bayesian Models?
When you are ill, you observe symptoms, but what you want is a diagnosis. If you are not a physician, I guess you simply find the diagnosis that best matches your symptoms. But what a physician reasoning like a HBM would do is look at your symptoms, their relative meaningfulness, how they fit and relate to your previous health problems, those of your family, the currently common diseases and environmental conditions, your weaknesses, your strengths... and then combine all of this using his knowledge to update his guess about your health condition, and give you the most likely diagnosis. I am sure that this analogy reaches its limits pretty soon, but I think it can give a good intuition of what one would expect from a HBM, don't you? (And I did not find a better one.)
Why use factor graph for Bayesian inference?
I will try to answer my own question. Message: A very important notion in a factor graph is the message, which can be understood as "A tells something about B" when a message is passed from A to B. In the probabilistic-model context, the message from factor $f$ to variable $x$ can be denoted $\mu_{f \to x}$, which can be understood as: $f$ knows something (a probability distribution, in this case) and tells it to $x$. Factors summarize messages: In the "factor" context, to know the probability distribution of some variable, one needs to have all the messages ready from its neighboring factors and then summarize them to derive the distribution. For example, in the following graph the edges, $x_i$, are variables and the nodes, $f_i$, are factors connected by edges. To know $P(x_4)$, we need to know $\mu_{f_3 \to x_4}$ and $\mu_{f_4 \to x_4}$ and summarize them together. Recursive structure of messages: Then how do we know these two messages? Take $\mu_{f_4 \to x_4}$, for example. It can be seen as the message obtained by summarizing two other messages, $\mu_{x_5 \to f_4}$ and $\mu_{x_6 \to f_4}$. And $\mu_{x_6 \to f_4}$ is essentially $\mu_{f_6 \to x_6}$, which can be calculated from yet other messages. This is the recursive structure of messages: messages are defined in terms of messages. Recursion is a good thing, both for understanding and for easier implementation in a computer program. Conclusion: The benefits of factors are: a factor, which summarizes inflow messages and outputs the outflow message, enables the messages that are essential for computing marginals; and factors enable the recursive structure of calculating messages, making the message passing or belief propagation process easier to understand, and possibly easier to implement.
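The recursion can be sketched on a tiny made-up chain (not the graph referred to above): messages flow from one end to the other, each one summarizing everything behind it, and the final marginal agrees with brute-force summation over the joint.

```python
import numpy as np

# Minimal sum-product sketch on a chain x1 -- f12 -- x2 -- f23 -- x3.
f1 = np.array([0.6, 0.4])                    # unary factor on x1
f12 = np.array([[0.9, 0.1], [0.2, 0.8]])     # pairwise factor f12(x1, x2)
f23 = np.array([[0.7, 0.3], [0.5, 0.5]])     # pairwise factor f23(x2, x3)

# Messages flow left to right; each message summarizes everything behind it.
mu_f1_x1 = f1                                # factor-to-variable message
mu_x1_f12 = mu_f1_x1                         # variable with one other neighbor just forwards
mu_f12_x2 = mu_x1_f12 @ f12                  # sum over x1
mu_x2_f23 = mu_f12_x2
mu_f23_x3 = mu_x2_f23 @ f23                  # sum over x2

marginal_x3 = mu_f23_x3 / mu_f23_x3.sum()

# Brute-force check: sum the full joint over x1 and x2.
joint = f1[:, None, None] * f12[:, :, None] * f23[None, :, :]
brute = joint.sum(axis=(0, 1))
brute = brute / brute.sum()
print(marginal_x3, brute)
```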
Why use factor graph for Bayesian inference?
A Bayesian network, by definition, is a collection of random variables $\{X_n: P \rightarrow \mathbb{R}\}$ and a graph $G$ such that the probability function $P(X_1,...,X_n)$ factors as conditional probabilities in a way determined by $G$. See http://en.wikipedia.org/wiki/Factor_graph. Most importantly, the factors in the Bayesian network are of the form $P(X_i| X_{j_1},\ldots,X_{j_n})$. A factor graph, even though it is more general, is the same in that it is a graphical way to keep information about the factorization of $P(X_1,...,X_n)$, or any other function. The difference is that when a Bayesian network is converted to a factor graph, the factors may be grouped. For example, one factor in the factor graph may be the product $P(X_i| X_{j_1},\ldots,X_{j_n})\,P(X_{j_1})\,P(X_{j_n})$: the original Bayesian network stored this as three factors, but the factor graph stores it as only one. In general, the factor graph of a Bayesian network keeps track of a coarser factorization than the original Bayesian network did.
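A small numerical sketch of the grouping (hypothetical probability tables, my own illustration): whether the joint is stored as three conditional-probability factors or with two of them merged into one grouped factor, the represented distribution is identical.

```python
import numpy as np

# BN stores three factors P(A), P(B|A), P(C|B); a factor graph may group
# P(A)P(B|A) into one factor h(A, B). Both give the same joint distribution.
P_A = np.array([0.3, 0.7])
P_B_A = np.array([[0.9, 0.1], [0.4, 0.6]])   # P(B|A), rows indexed by A
P_C_B = np.array([[0.2, 0.8], [0.5, 0.5]])   # P(C|B), rows indexed by B

# Joint from the three Bayesian-network factors.
joint_bn = P_A[:, None, None] * P_B_A[:, :, None] * P_C_B[None, :, :]

# Joint from the grouped factor h(A, B) = P(A)P(B|A) plus P(C|B).
h = P_A[:, None] * P_B_A
joint_fg = h[:, :, None] * P_C_B[None, :, :]

print(np.allclose(joint_bn, joint_fg))       # same joint, fewer factors
```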
Why use factor graph for Bayesian inference?
A factor graph is just yet another representation of a Bayesian model. If you had an exact algorithm for inference in a particular Bayesian network, and another exact algorithm for inference in the corresponding factor graph, the two results would be the same. Factor graphs just happen to be a useful representation for deriving efficient (exact and approximate) inference algorithms by exploiting conditional independence between variables in the model, thereby mitigating the curse of dimensionality. To give an analogy: the Fourier transform contains the exact same information as the time representation of a signal, yet some tasks are easier accomplished in the frequency domain, and some are easier accomplished in the time domain. In the same sense, a factor graph is just a reformulation of the same information (the probabilistic model), which is helpful for deriving clever algorithms but doesn't really "add" anything. To be more specific, assume that you are interested in deriving the marginal $p(x_i)$ of some quantity in a model, which requires integrating over all other variables: $$ p(x_i) = \int p(x_1, x_2, \ldots, x_i, \ldots, x_N) \, dx_1 \, dx_2 \cdots dx_{i-1} \, dx_{i+1} \cdots dx_N $$ In a high-dimensional model, this is an integration over a high-dimensional space, which is very hard to calculate. (This marginalization/integration problem is what makes inference in high dimensions hard/intractable. One approach is to find clever ways to evaluate this integral efficiently, which is what Markov chain Monte Carlo (MCMC) methods do. Those are known to suffer from notoriously long computation times.) Without going into too many details, a factor graph encodes the fact that many of these variables are conditionally independent of one another. This enables replacing the above high-dimensional integration by a series of integration problems of much lower dimension, namely, the computations of the different messages. 
By exploiting the structure of the problem in this way, inference becomes feasible. This is the core benefit of formulating inference in terms of factor graphs.
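The dimension reduction can be demonstrated on a toy discrete chain model (my own illustration; the chain structure and numbers are assumptions): conditional independence turns one $(N-1)$-dimensional sum into $N-1$ one-dimensional sums, and both give the same marginal.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)

# Chain p(x1)p(x2|x1)...p(xN|x_{N-1}) over binary variables.
N = 10
p1 = np.array([0.5, 0.5])
trans = rng.uniform(0.1, 0.9, size=(N - 1, 2, 2))
trans /= trans.sum(axis=2, keepdims=True)      # each row is a conditional distribution

# Efficient: forward recursion of messages, cost O(N * 2^2).
msg = p1
for t in trans:
    msg = msg @ t                              # one low-dimensional sum per step
marginal_fast = msg

# Naive: sum the joint over all 2^(N-1) configurations of the other variables.
marginal_slow = np.zeros(2)
for xs in product([0, 1], repeat=N):
    p = p1[xs[0]]
    for i in range(N - 1):
        p *= trans[i, xs[i], xs[i + 1]]
    marginal_slow[xs[-1]] += p

print(marginal_fast, marginal_slow)
```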
Coding for an ordered covariate
You could check out Gertheiss & Tutz, Penalized Regression with Ordinal Predictors, and their R package ordPens. They say: "Rather than estimating the parameters by simple maximum likelihood methods we propose to penalize differences between coefficients of adjacent categories in the estimation procedure. The rationale behind is as follows: the response $y$ is assumed to change slowly between two adjacent categories of the independent variable. In other words, we try to avoid high jumps and prefer a smoother coefficient vector."
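A rough sketch of the penalty idea (this is not the ordPens implementation: it is a least-squares stand-in with made-up data, where a ridge penalty on adjacent coefficient differences plays the role of their penalized likelihood):

```python
import numpy as np

rng = np.random.default_rng(2)

# Ordinal predictor with k levels, dummy-coded; penalize squared differences
# between coefficients of adjacent categories so the fitted effect is smooth.
n, k, lam = 200, 5, 10.0
cat = rng.integers(0, k, n)                      # observed ordinal levels
X = np.eye(k)[cat]                               # dummy coding, shape (n, k)
beta_true = np.array([0.0, 0.2, 0.5, 0.6, 0.9])  # smooth effect across levels
y = X @ beta_true + rng.normal(0, 0.5, n)

D = np.diff(np.eye(k), axis=0)                   # first-difference matrix (k-1, k)
# Closed-form penalized least squares: argmin ||y - Xb||^2 + lam ||Db||^2
beta_hat = np.linalg.solve(X.T @ X + lam * D.T @ D, X.T @ y)

print(beta_hat)
print(np.abs(np.diff(beta_hat)).max())           # largest jump between adjacent levels
```

By construction, the penalized fit has a smaller sum of squared adjacent-coefficient differences than the unpenalized dummy-coding fit, which is exactly the "avoid high jumps" behavior described in the quote.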
Maximal & closed frequent -- Answer Included
I found a slightly extended definition in this source (which includes a good explanation). Here is a more reliable (published) source: CHARM: An efficient algorithm for closed itemset mining by Mohammed J. Zaki and Ching-Jui Hsiao. According to this source: An itemset is closed if none of its immediate supersets has the same support as the itemset. An itemset is maximal frequent if none of its immediate supersets is frequent. Some remarks: It is necessary to set a min_support (support = the number of transactions containing the itemset of interest divided by the number of all transactions), which defines which itemsets are frequent. An itemset is frequent if its support >= min_support. In regards to the algorithm, only itemsets with min_support are considered when one tries to find the maximal frequent and closed itemsets. The important aspect in the definition of closed is that it does not matter if an immediate superset exists with more support; only immediate supersets with exactly the same support matter. maximal frequent => closed => frequent, but not vice versa. Application to the example of the OP (note: did not check the support counts). Let's say min_support = 0.5. This is fulfilled if min_support_count >= 3.
{A} = 4 ; not closed due to {A,E}
{B} = 2 ; not frequent => ignore
{C} = 5 ; not closed due to {C,E}
{D} = 4 ; not closed due to {D,E}, but not maximal due to e.g. {A,D}
{E} = 6 ; closed, but not maximal due to e.g. {D,E}
{A,B} = 1; not frequent => ignore
{A,C} = 3; not closed due to {A,C,E}
{A,D} = 3; not closed due to {A,D,E}
{A,E} = 4; closed, but not maximal due to {A,D,E}
{B,C} = 2; not frequent => ignore
{B,D} = 0; not frequent => ignore
{B,E} = 2; not frequent => ignore
{C,D} = 3; not closed due to {C,D,E}
{C,E} = 5; closed, but not maximal due to {C,D,E}
{D,E} = 4; closed, but not maximal due to {A,D,E}
{A,B,C} = 1; not frequent => ignore
{A,B,D} = 0; not frequent => ignore
{A,B,E} = 1; not frequent => ignore
{A,C,D} = 2; not frequent => ignore
{A,C,E} = 3; maximal frequent
{A,D,E} = 3; maximal frequent
{B,C,D} = 0; not frequent => ignore
{B,C,E} = 2; not frequent => ignore
{C,D,E} = 3; maximal frequent
{A,B,C,D} = 0; not frequent => ignore
{A,B,C,E} = 1; not frequent => ignore
{B,C,D,E} = 0; not frequent => ignore
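The classification can be checked mechanically. The following sketch applies the two definitions directly to the support counts listed (itemsets omitted from the list are treated as having support below the threshold, which holds in this example; as in the answer, closed itemsets are only sought among the frequent ones):

```python
# Classify itemsets as frequent / closed / maximal frequent from support counts.
support = {
    frozenset('A'): 4, frozenset('B'): 2, frozenset('C'): 5,
    frozenset('D'): 4, frozenset('E'): 6,
    frozenset('AB'): 1, frozenset('AC'): 3, frozenset('AD'): 3,
    frozenset('AE'): 4, frozenset('BC'): 2, frozenset('BD'): 0,
    frozenset('BE'): 2, frozenset('CD'): 3, frozenset('CE'): 5,
    frozenset('DE'): 4,
    frozenset('ABC'): 1, frozenset('ABD'): 0, frozenset('ABE'): 1,
    frozenset('ACD'): 2, frozenset('ACE'): 3, frozenset('ADE'): 3,
    frozenset('BCD'): 0, frozenset('BCE'): 2, frozenset('CDE'): 3,
    frozenset('ABCD'): 0, frozenset('ABCE'): 1, frozenset('BCDE'): 0,
}
min_count = 3
items = set().union(*support)

def immediate_supersets(s):
    return [s | {i} for i in items - s]

frequent = {s for s, c in support.items() if c >= min_count}
# Closed: no immediate superset has the same support (missing entries count as 0).
closed = {s for s in frequent
          if all(support.get(t, 0) != support[s] for t in immediate_supersets(s))}
# Maximal frequent: no immediate superset is frequent.
maximal = {s for s in frequent
           if all(t not in frequent for t in immediate_supersets(s))}

print(sorted(''.join(sorted(s)) for s in maximal))
```

This reproduces the hand classification above: {A,C,E}, {A,D,E}, {C,D,E} are maximal frequent, and e.g. {A,E} is closed while {A} is not.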
Maximal & closed frequent -- Answer Included
You may want to read up on the APRIORI algorithm. It avoids unnecessary itemsets by clever pruning. {A} = 4 ; {B} = 2 ; {C} = 5 ; {D} = 4 ; {E} = 6. B is not frequent, so remove it. Construct and count two-itemsets (no magic yet, except that B is already out): {A,C} = 3; {A,D} = 3; {A,E} = 4; {C,D} = 3; {C,E} = 5; {D,E} = 3. All of these are frequent (notice that anything containing B cannot be frequent!). Now use the prefix rule: ONLY combine itemsets starting with the same n-1 items. Remove all where any subset is not frequent. Count the remaining itemsets: {A,C,D} = 2; {A,C,E} = 3; {A,D,E} = 3; {C,D,E} = 3. Note that {A,C,D} is not frequent. As there is no shared prefix, there cannot be a larger frequent itemset! Notice how much less work I did! For maximal / closed itemsets, check subsets / supersets. Note that e.g. {E} = 6 and {A,E} = 4. {E} is a subset but has higher support, i.e. it is closed but not maximal. {A} is neither, as it does not have higher support than {A,E}, i.e. it is redundant.
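A sketch of the prefix join and subset pruning described above (the actual transactions behind the question's counts are not shown, so this uses a small made-up transaction list; the structure of the algorithm is what matters):

```python
from itertools import combinations

# Apriori level-wise search: prefix join + subset pruning on toy transactions.
transactions = [set('ACDE'), set('ADE'), set('CE'), set('ACE'), set('BCE'), set('BDE')]
min_count = 3

def count(itemset):
    return sum(itemset <= t for t in transactions)

items = sorted(set().union(*transactions))
level = [(i,) for i in items if count({i}) >= min_count]   # frequent 1-itemsets
frequent = {frozenset(c) for c in level}

while level:
    # Prefix join: combine only itemsets sharing the first k-1 items.
    cand = [a + (b[-1],) for a, b in combinations(level, 2) if a[:-1] == b[:-1]]
    # Prune: every (k-1)-subset of a candidate must already be frequent.
    cand = [c for c in cand
            if all(frozenset(c[:i] + c[i + 1:]) in frequent for i in range(len(c)))]
    # Only surviving candidates are counted against the transactions.
    level = [c for c in cand if count(set(c)) >= min_count]
    frequent |= {frozenset(c) for c in level}

print(sorted(''.join(sorted(s)) for s in frequent))
```

Because infrequent items (here B) are dropped at level one and candidates with any infrequent subset are never counted, most of the $2^{|items|}$ itemsets are never touched.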
What exactly is weakly informative prior?
The above comment is accurate. For a quantitative discussion, there are a number of "uninformative" priors in the literature. See for example Jeffreys' prior; see the earlier post What is an "uninformative prior"? Can we ever have one with truly no information? They are defined in different ways, but the key is that they do not place too much probability in any particular interval (and hence favor those values), with the uniform distribution being a canonical example. The idea is to let the data determine where the mode is.
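As a quick illustration of "letting the data determine the mode" (a grid approximation with made-up counts): under a uniform prior for a binomial proportion, the posterior mode coincides with the maximum-likelihood estimate.

```python
import numpy as np

# Hypothetical data: 7 successes in 20 trials.
k, n = 7, 20
theta = np.linspace(1e-6, 1 - 1e-6, 100001)    # grid over the proportion

likelihood = theta**k * (1 - theta)**(n - k)
flat_prior = np.ones_like(theta)               # uniform: favors no interval

posterior = likelihood * flat_prior
posterior /= posterior.sum()                   # normalize on the grid

mode = theta[np.argmax(posterior)]
print(mode, k / n)                             # posterior mode vs. MLE
```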
What exactly is weakly informative prior?
Further to Eupraxis1981's discussion of informative priors, you can think of the "information" in a prior as inversely proportional to its variance. Consider a prior with near-zero variance: you're basically saying "before looking at the data, I'm almost positive I already know the location of the true value of the statistic." Conversely, if you set a really wide variance, you're saying "without looking at the data, I have really no assumptions about the true value of the parameter. It could be pretty much anywhere, and I won't be that surprised. I've got a hunch it's probably near the mode of my prior, but if it turns out to be far from the mode I won't actually be surprised." Uninformative priors are attempts to bring no prior assumptions into your analysis (how successful they are is open to debate). But it's entirely possible, and sometimes useful, for a prior to be only "weakly" informative.
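This trade-off is easy to see in the conjugate normal-normal model (a sketch with made-up data): a near-zero prior variance pins the posterior to the prior mean, while a huge prior variance lets the data dominate.

```python
import numpy as np

def posterior_mean(prior_mean, prior_var, data, noise_var=1.0):
    """Conjugate normal-normal update for the mean of normal data."""
    n = len(data)
    precision = 1.0 / prior_var + n / noise_var
    return (prior_mean / prior_var + data.sum() / noise_var) / precision

rng = np.random.default_rng(3)
data = rng.normal(5.0, 1.0, 30)            # true mean is 5

tight = posterior_mean(0.0, 1e-4, data)    # "I'm almost positive it's near 0"
vague = posterior_mean(0.0, 1e4, data)     # "it could be pretty much anywhere"
print(tight, vague, data.mean())
```

With the tight prior the data are nearly ignored; with the vague prior the posterior mean is essentially the sample mean. A "weakly informative" prior sits between these extremes.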
Cumulative / Cumulative Plot (or "Visualizing a Lorenz Curve")
If you want to do it simply with the basic R commands, then following codes may help. At first you read the data. person<-rep(1:7) data<-c(4253, 4262, 4270, 4383, 4394, 4476, 4635) Then you can see the contribution of each user. plot(person,data) lines(person,data) You can also see how much the first two, three, four, ... , seven persons contribute. cdata<-cumsum(data) plot(person,cdata) lines(person,cdata) Finally you can get your desired plot (in proportions in both axes) by the following commands: plot(person/max(person),cdata/max(cdata),xlab="Top-contributing users",ylab="Data",col="red") lines(person/max(person),cdata/max(cdata),col="red") I have labelled the axes as you wanted. It can give you a clear view about how much percentage of data are being contributed by a certain proportion of persons.
Cumulative / Cumulative Plot (or "Visualizing a Lorenz Curve")
If you want to do it simply with the basic R commands, then following codes may help. At first you read the data. person<-rep(1:7) data<-c(4253, 4262, 4270, 4383, 4394, 4476, 4635) Then you can see
Cumulative / Cumulative Plot (or "Visualizing a Lorenz Curve") If you want to do it simply with the basic R commands, then following codes may help. At first you read the data. person<-rep(1:7) data<-c(4253, 4262, 4270, 4383, 4394, 4476, 4635) Then you can see the contribution of each user. plot(person,data) lines(person,data) You can also see how much the first two, three, four, ... , seven persons contribute. cdata<-cumsum(data) plot(person,cdata) lines(person,cdata) Finally you can get your desired plot (in proportions in both axes) by the following commands: plot(person/max(person),cdata/max(cdata),xlab="Top-contributing users",ylab="Data",col="red") lines(person/max(person),cdata/max(cdata),col="red") I have labelled the axes as you wanted. It can give you a clear view about how much percentage of data are being contributed by a certain proportion of persons.
Cumulative / Cumulative Plot (or "Visualizing a Lorenz Curve") If you want to do it simply with the basic R commands, then following codes may help. At first you read the data. person<-rep(1:7) data<-c(4253, 4262, 4270, 4383, 4394, 4476, 4635) Then you can see
26,543
Cumulative / Cumulative Plot (or "Visualizing a Lorenz Curve")
I found a way to quickly visualize the Lorenz curve with ggplot2, resulting in a more aesthetic and easier-to-interpret graphic. For this latter reason, I mirrored the Lorenz curve on the diagonal line which results in a more intuitive form, if you ask me. It also contains annotation lines that should facilitate the explanation of the plot (e.g. "The 5% top contributing users make up 50% of the data"). Attention: Finding the right spot for the annotation line makes use of a quite idiotic heuristic and might not work with a smaller data set. Example data: data <- data.frame(lco = c(338L, 6317L, 79L, 36L, 3634L, 8633L, 3231L, 27L, 173L, 5934L, 4476L, 1604L, 340L, 723L, 260L, 7008L, 7968L, 3854L, 4011L, 1596L, 1428L, 587L, 1595L, 32L, 277L, 5201L, 133L, 407L, 676L, 1874L, 1700L, 843L, 237L, 4270L, 2404L, 530L, 305L, 9344L, 720L, 1806L, 35L, 790L, 1383L, 5522L, 178L, 75L, 6219L, 121L, 923L, 1123L, 102L, 70L, 50L, 119L, 445L, 464L, 182L, 244L, 1358L, 7840L, 661L, 70L, 132L, 634L, 4262L, 1872L, 345L, 11L, 28L, 284L, 626L, 1033L, 26L, 798L, 13L, 480L, 44L, 339L, 259L, 312L, 262L, 67L, 1359L, 1835L, 13L, 189L, 292L, 2152L, 215L, 39L, 1131L, 1323L, 700L, 3271L, 1622L, 4669L, 125L, 281L, 277L, 232L, 1111L, 8669L, 7233L, 9363L, 400L, 502L, 1425L, 904L, 2924L, 927L, 31L, 1132L, 200L, 17L, 7602L, 12365L, 258L, 16L, 223L, 55L, 11L, 785L, 493L, 4L, 1161L, 393L, 791L, 30L, 568L, 386L, 75L, 1882L, 674L, 29L, 4217L, 332L, 103L, 332L, 30L, 168L, 277L, 176L, 49L, 19L, 76L, 144L, 145L, 65L, 52L, 391L, 25L, 104L, 484L, 20L, 12L, 188L, 5677L, 19L, 273L, 424L, 281L, 458L, 50L, 255L, 898L, 840L, 872L, 573L, 874L, 8L, 35L, 235L, 22L, 229L, 158L, 59L, 147L, 544L, 24L, 325L, 15L, 3L, 1531L, 1014L, 43L, 35L, 2176L, 934L, 253L, 69L, 784L, 352L, 188L, 27L, 1516L, 322L, 1394L, 7686L, 13L, 812L, 349L, 779L, 77L, 941L, 104L, 82L, 93L, 1206L, 24L, 6159L, 131L, 99L, 1310L, 27L, 520L, 327L, 350L, 42L, 102L, 25L, 14L, 42L, 33L, 469L, 177L, 119L, 64L, 75L, 190L, 82L, 82L, 473L, 51L, 9L, 49L, 41L, 221L, 
1778L, 4188L, 4L, 86L, 39L, 93L, 35L, 44L, 227L, 636L, 589L, 332L, 22L, 15L, 50L, 147L, 32L, 134L, 133L, 629L, 168L, 69L, 747L, 34L, 20L, 552L, 8L, 54L, 28L, 1437L, 83L, 3225L, 776L, 784L, 247L, 33L, 40L, 368L, 104L, 420L, 357L, 9L, 164L, 7L, 227L, 142L, 33L, 91L, 78L, 175L, 194L, 294L, 433L, 52L, 7L, 372L, 29L, 220L, 371L, 375L, 233L, 12L, 35L, 795L, 35L, 43L, 50L, 57L, 32L, 162L, 124L, 160L, 52L, 132L, 131L, 50L, 117L, 145L, 33L, 83L, 33L, 123L, 43L, 27L, 91L, 25L, 2116L, 51L, 509L, 603L, 267L, 10L, 10L, 51L, 6028L, 99L, 597L, 53L, 131L, 1084L, 1222L, 153L, 70L, 746L, 437L, 82L, 299L, 1682L, 21L, 24L, 973L, 207L, 55L, 116L, 47L, 48L, 149L, 100L, 690L, 129L, 80L, 1143L, 103L, 50L, 242L, 708L, 316L, 83L, 61L, 15L, 203L, 435L, 474L, 47L, 718L, 21L, 33L, 27L, 15L, 53L, 97L, 6L, 39L, 59L, 255L, 51L, 15L, 20L, 514L, 74L, 20L, 319L, 14L, 14L, 45L, 36L, 625L, 5534L, 43L, 590L, 43L, 29L, 233L, 135L, 174L, 20L, 335L, 106L, 230L, 64L, 3551L, 524L, 72L, 44L, 16L, 98L, 37L, 62L, 390L, 83L, 28L, 3L, 63L, 32L, 124L, 56L, 149L, 11L, 153L, 661L, 15L, 25L, 49L, 626L, 141L, 38L, 23L, 123L, 530L, 47L, 6L, 18L, 222L, 391L, 71L, 75L, 234L, 142L, 45L, 439L, 675L, 14L, 53L, 19L, 100L, 51L, 147L, 10L, 141L, 979L, 97L, 330L, 112L, 71L, 4L, 9L, 124L, 141L, 145L, 302L, 122L, 435L, 50L, 81L, 99L, 330L, 84L, 41L, 227L, 4L, 37L, 5L, 99L, 210L, 7L, 183L, 67L, 98L, 157L, 96L, 150L, 22L, 288L, 391L, 188L, 54L, 56L, 49L, 618L, 160L, 631L, 9L, 355L, 56L, 119L, 37L, 36L, 153L, 110L, 126L, 335L, 121L, 80L, 113L, 62L, 97L, 22L, 72L, 1742L, 1007L, 11L, 121L, 27L, 62L, 823L, 56L, 40L, 26L, 69L, 120L, 516L, 11L, 146L, 245L, 174L, 1648L, 105L, 123L, 17L, 2565L, 138L, 200L, 46L, 130L, 189L, 87L, 191L, 143L, 76L, 702L, 79L, 67L, 166L, 3487L, 88L, 395L, 283L, 140L, 535L, 198L, 64L, 1033L, 376L, 180L, 14L, 32L, 441L, 361L, 520L, 62L, 247L, 10L, 24L, 721L, 176L, 164L, 33L, 44L, 12L, 30L, 13L, 157L, 122L, 161L, 45L, 34L, 538L, 74L, 14L, 19L, 15L, 1714L, 437L, 16L, 12L, 130L, 25L, 93L, 9L, 15L, 81L, 889L, 27L, 
195L, 5L, 233L, 113L, 356L, 51L, 146L, 6822L, 65L, 166L, 45L, 18L, 295L, 196L, 145L, 256L, 14L, 8L, 89L, 32L, 20L, 239L, 68L, 63L, 21L, 102L, 158L, 1138L, 48L, 113L, 144L, 83L, 93L, 3L, 1032L, 45L, 36L, 68L, 146L, 370L, 25L, 10L, 290L, 858L, 19L, 17L, 64L, 42L, 38L, 711L, 144L, 58L, 144L, 1736L, 188L, 38L, 58L, 91L, 255L, 58L, 307L, 4L, 9L, 60L, 14L, 13L, 118L, 1549L, 108L, 483L, 34L, 1471L, 13L, 16L, 76L, 163L, 147L, 75L, 520L, 4L, 59L, 73L, 32L, 24L, 656L, 16L, 2655L, 38L, 20L, 1011L, 85L, 592L, 91L, 883L, 5174L, 42L, 17L, 88L, 21L, 61L, 33L, 1726L, 46L, 387L, 920L, 120L, 134L, 72L, 144L, 1603L, 646L, 45L, 282L, 56L, 19L, 41L, 69L, 151L, 632L, 47L, 48L, 126L, 114L, 119L, 144L, 949L, 67L, 144L, 27L, 61L, 70L, 287L, 64L, 323L, 27L, 149L, 1914L, 20L, 1077L, 21L, 70L, 59L, 123L, 537L, 131L, 1226L, 2908L, 8L, 133L, 42L, 175L, 100L, 162L, 494L, 414L, 2618L, 33L, 93L, 48L, 3676L, 553L, 705L, 58L, 268L, 141L, 284L, 98L, 135L, 13L, 49L, 792L, 128L, 172L, 236L, 221L, 596L, 35L, 241L, 10L, 193L, 189L, 26L, 27L, 47L, 100L, 398L, 21L, 26L, 86L, 147L, 3639L, 161L, 60L, 106L, 111L, 42L, 11L, 654L, 21L, 129L, 1152L, 224L, 49L, 12L, 22L, 73L, 207L, 165L, 113L, 12L, 1224L, 177L, 6L, 390L, 2747L, 23L, 46L, 1166L, 805L, 20L, 130L, 46L, 110L, 16L, 88L, 652L, 61L, 86L, 16L, 804L, 41L, 4383L, 511L, 126L, 549L, 23L, 45L, 80L, 162L, 127L, 700L, 43L, 147L, 102L, 84L, 67L, 57L, 30L, 55L, 274L, 314L, 847L, 203L, 322L, 8350L, 101L, 10L, 122L, 54L, 120L, 10L, 22L, 327L, 234L, 56L, 998L, 409L, 131L, 2163L, 81L, 19L, 6675L, 7L, 2182L, 1136L, 71L, 15L, 286L, 133L, 132L, 37L, 144L, 28L, 392L, 870L, 312L, 190L, 135L, 16L, 6L, 153L, 38L, 62L, 2710L, 36L, 61L, 37L, 88L, 375L, 88L, 131L, 73L, 212L, 918L, 185L, 53L, 143L, 69L, 2231L, 54L, 23L, 220L, 195L, 468L, 2009L, 364L, 54L, 277L, 1547L, 240L, 1700L, 1533L, 374L, 363L, 35L, 97L, 19L, 87L, 67L, 22L, 267L, 16L, 11L, 35L, 460L, 44L, 58L, 26L, 13L, 172L, 114L, 272L, 64L, 254L, 49L, 440L, 329L, 48L, 93L, 10L, 70L, 17L, 120L, 5229L, 118L, 133L, 43L, 
2419L, 207L, 102L, 90L, 127L, 3939L, 14L, 5L, 552L, 425L, 656L, 511L, 170L, 396L, 177L, 3680L, 111L, 21L, 320L, 367L, 51L, 672L, 1675L, 59L, 91L, 281L, 113L, 19L, 37L, 65L, 288L, 27L, 149L, 61L, 63L, 75L, 165L, 90L, 9L, 12L, 82L, 111L, 157L)) Code: # lorenz curve of user contribution library(ineq) library(ggplot2) library(scales) library(grid) # compute lorenz curve lcolc <- Lc(data$lco) # bring lorenz curve in another format easily readable by ggplot2 # namely reverse the L column so that lorenz curve is mirrored on diagonal # p stays p (the diagonal) # Uprob contains the indices of the L's, but we need percentiles lcdf <- data.frame(L = rev(1-lcolc$L), p = lcolc$p, Uprob = c(1:length(lcolc$L)/length(lcolc$L))) # basic plot with the diagonal line and the L line p <- ggplot(lcdf, aes(x = Uprob, y = L)) + geom_line(colour = hcl(h=15, l=65, c=100)) + geom_line(aes(x = p, y = p)) # compute annotation lines at 50 percent L (uses a heuristic) index <- which(lcdf$L >= 0.499 & lcdf$L <= 0.501)[1] ypos <- lcdf$L[index] yposs <- c(0,ypos) xpos <- index/length(lcdf$L) xposs <- c(0,xpos) ypositions <- data.frame(x = xposs, y = c(ypos,ypos)) xpositions <- data.frame(x = c(xpos,xpos), y = yposs) # add annotation line p <- p + geom_line(data = ypositions, aes(x = x, y = y), linetype="dashed") + geom_line(data = xpositions, aes(x = x, y = y), linetype="dashed") # set axes and labels (namely insert custom breaks in scales) p <- p + scale_x_continuous(breaks=c(0, xpos,0.25,0.5,0.75,1), labels = percent_format()) + scale_y_continuous( labels = percent_format()) # add minimal theme p <- p + theme_minimal() + xlab("Percentage of objects") + ylab("Percentage of events") # customize theme p <- p + theme(plot.margin = unit(c(0.5,1,1,1), "cm"), axis.title.x = element_text(vjust=-1), axis.title.y = element_text(angle=90, vjust=0), panel.grid.minor = element_blank(), plot.background = element_rect(fill = rgb(0.99,0.99,0.99), linetype=0)) # print plot p
Cumulative / Cumulative Plot (or "Visualizing a Lorenz Curve")
I found a way to quickly visualize the Lorenz curve with ggplot2, resulting in a more aesthetic and easier-to-interpret graphic. For this latter reason, I mirrored the Lorenz curve on the diagonal lin
Cumulative / Cumulative Plot (or "Visualizing a Lorenz Curve") I found a way to quickly visualize the Lorenz curve with ggplot2, resulting in a more aesthetic and easier-to-interpret graphic. For this latter reason, I mirrored the Lorenz curve on the diagonal line which results in a more intuitive form, if you ask me. It also contains annotation lines that should facilitate the explanation of the plot (e.g. "The 5% top contributing users make up 50% of the data"). Attention: Finding the right spot for the annotation line makes use of a quite idiotic heuristic and might not work with a smaller data set. Example data: data <- data.frame(lco = c(338L, 6317L, 79L, 36L, 3634L, 8633L, 3231L, 27L, 173L, 5934L, 4476L, 1604L, 340L, 723L, 260L, 7008L, 7968L, 3854L, 4011L, 1596L, 1428L, 587L, 1595L, 32L, 277L, 5201L, 133L, 407L, 676L, 1874L, 1700L, 843L, 237L, 4270L, 2404L, 530L, 305L, 9344L, 720L, 1806L, 35L, 790L, 1383L, 5522L, 178L, 75L, 6219L, 121L, 923L, 1123L, 102L, 70L, 50L, 119L, 445L, 464L, 182L, 244L, 1358L, 7840L, 661L, 70L, 132L, 634L, 4262L, 1872L, 345L, 11L, 28L, 284L, 626L, 1033L, 26L, 798L, 13L, 480L, 44L, 339L, 259L, 312L, 262L, 67L, 1359L, 1835L, 13L, 189L, 292L, 2152L, 215L, 39L, 1131L, 1323L, 700L, 3271L, 1622L, 4669L, 125L, 281L, 277L, 232L, 1111L, 8669L, 7233L, 9363L, 400L, 502L, 1425L, 904L, 2924L, 927L, 31L, 1132L, 200L, 17L, 7602L, 12365L, 258L, 16L, 223L, 55L, 11L, 785L, 493L, 4L, 1161L, 393L, 791L, 30L, 568L, 386L, 75L, 1882L, 674L, 29L, 4217L, 332L, 103L, 332L, 30L, 168L, 277L, 176L, 49L, 19L, 76L, 144L, 145L, 65L, 52L, 391L, 25L, 104L, 484L, 20L, 12L, 188L, 5677L, 19L, 273L, 424L, 281L, 458L, 50L, 255L, 898L, 840L, 872L, 573L, 874L, 8L, 35L, 235L, 22L, 229L, 158L, 59L, 147L, 544L, 24L, 325L, 15L, 3L, 1531L, 1014L, 43L, 35L, 2176L, 934L, 253L, 69L, 784L, 352L, 188L, 27L, 1516L, 322L, 1394L, 7686L, 13L, 812L, 349L, 779L, 77L, 941L, 104L, 82L, 93L, 1206L, 24L, 6159L, 131L, 99L, 1310L, 27L, 520L, 327L, 350L, 42L, 102L, 25L, 14L, 42L, 33L, 469L, 177L, 
119L, 64L, 75L, 190L, 82L, 82L, 473L, 51L, 9L, 49L, 41L, 221L, 1778L, 4188L, 4L, 86L, 39L, 93L, 35L, 44L, 227L, 636L, 589L, 332L, 22L, 15L, 50L, 147L, 32L, 134L, 133L, 629L, 168L, 69L, 747L, 34L, 20L, 552L, 8L, 54L, 28L, 1437L, 83L, 3225L, 776L, 784L, 247L, 33L, 40L, 368L, 104L, 420L, 357L, 9L, 164L, 7L, 227L, 142L, 33L, 91L, 78L, 175L, 194L, 294L, 433L, 52L, 7L, 372L, 29L, 220L, 371L, 375L, 233L, 12L, 35L, 795L, 35L, 43L, 50L, 57L, 32L, 162L, 124L, 160L, 52L, 132L, 131L, 50L, 117L, 145L, 33L, 83L, 33L, 123L, 43L, 27L, 91L, 25L, 2116L, 51L, 509L, 603L, 267L, 10L, 10L, 51L, 6028L, 99L, 597L, 53L, 131L, 1084L, 1222L, 153L, 70L, 746L, 437L, 82L, 299L, 1682L, 21L, 24L, 973L, 207L, 55L, 116L, 47L, 48L, 149L, 100L, 690L, 129L, 80L, 1143L, 103L, 50L, 242L, 708L, 316L, 83L, 61L, 15L, 203L, 435L, 474L, 47L, 718L, 21L, 33L, 27L, 15L, 53L, 97L, 6L, 39L, 59L, 255L, 51L, 15L, 20L, 514L, 74L, 20L, 319L, 14L, 14L, 45L, 36L, 625L, 5534L, 43L, 590L, 43L, 29L, 233L, 135L, 174L, 20L, 335L, 106L, 230L, 64L, 3551L, 524L, 72L, 44L, 16L, 98L, 37L, 62L, 390L, 83L, 28L, 3L, 63L, 32L, 124L, 56L, 149L, 11L, 153L, 661L, 15L, 25L, 49L, 626L, 141L, 38L, 23L, 123L, 530L, 47L, 6L, 18L, 222L, 391L, 71L, 75L, 234L, 142L, 45L, 439L, 675L, 14L, 53L, 19L, 100L, 51L, 147L, 10L, 141L, 979L, 97L, 330L, 112L, 71L, 4L, 9L, 124L, 141L, 145L, 302L, 122L, 435L, 50L, 81L, 99L, 330L, 84L, 41L, 227L, 4L, 37L, 5L, 99L, 210L, 7L, 183L, 67L, 98L, 157L, 96L, 150L, 22L, 288L, 391L, 188L, 54L, 56L, 49L, 618L, 160L, 631L, 9L, 355L, 56L, 119L, 37L, 36L, 153L, 110L, 126L, 335L, 121L, 80L, 113L, 62L, 97L, 22L, 72L, 1742L, 1007L, 11L, 121L, 27L, 62L, 823L, 56L, 40L, 26L, 69L, 120L, 516L, 11L, 146L, 245L, 174L, 1648L, 105L, 123L, 17L, 2565L, 138L, 200L, 46L, 130L, 189L, 87L, 191L, 143L, 76L, 702L, 79L, 67L, 166L, 3487L, 88L, 395L, 283L, 140L, 535L, 198L, 64L, 1033L, 376L, 180L, 14L, 32L, 441L, 361L, 520L, 62L, 247L, 10L, 24L, 721L, 176L, 164L, 33L, 44L, 12L, 30L, 13L, 157L, 122L, 161L, 45L, 34L, 538L, 74L, 14L, 19L, 15L, 
1714L, 437L, 16L, 12L, 130L, 25L, 93L, 9L, 15L, 81L, 889L, 27L, 195L, 5L, 233L, 113L, 356L, 51L, 146L, 6822L, 65L, 166L, 45L, 18L, 295L, 196L, 145L, 256L, 14L, 8L, 89L, 32L, 20L, 239L, 68L, 63L, 21L, 102L, 158L, 1138L, 48L, 113L, 144L, 83L, 93L, 3L, 1032L, 45L, 36L, 68L, 146L, 370L, 25L, 10L, 290L, 858L, 19L, 17L, 64L, 42L, 38L, 711L, 144L, 58L, 144L, 1736L, 188L, 38L, 58L, 91L, 255L, 58L, 307L, 4L, 9L, 60L, 14L, 13L, 118L, 1549L, 108L, 483L, 34L, 1471L, 13L, 16L, 76L, 163L, 147L, 75L, 520L, 4L, 59L, 73L, 32L, 24L, 656L, 16L, 2655L, 38L, 20L, 1011L, 85L, 592L, 91L, 883L, 5174L, 42L, 17L, 88L, 21L, 61L, 33L, 1726L, 46L, 387L, 920L, 120L, 134L, 72L, 144L, 1603L, 646L, 45L, 282L, 56L, 19L, 41L, 69L, 151L, 632L, 47L, 48L, 126L, 114L, 119L, 144L, 949L, 67L, 144L, 27L, 61L, 70L, 287L, 64L, 323L, 27L, 149L, 1914L, 20L, 1077L, 21L, 70L, 59L, 123L, 537L, 131L, 1226L, 2908L, 8L, 133L, 42L, 175L, 100L, 162L, 494L, 414L, 2618L, 33L, 93L, 48L, 3676L, 553L, 705L, 58L, 268L, 141L, 284L, 98L, 135L, 13L, 49L, 792L, 128L, 172L, 236L, 221L, 596L, 35L, 241L, 10L, 193L, 189L, 26L, 27L, 47L, 100L, 398L, 21L, 26L, 86L, 147L, 3639L, 161L, 60L, 106L, 111L, 42L, 11L, 654L, 21L, 129L, 1152L, 224L, 49L, 12L, 22L, 73L, 207L, 165L, 113L, 12L, 1224L, 177L, 6L, 390L, 2747L, 23L, 46L, 1166L, 805L, 20L, 130L, 46L, 110L, 16L, 88L, 652L, 61L, 86L, 16L, 804L, 41L, 4383L, 511L, 126L, 549L, 23L, 45L, 80L, 162L, 127L, 700L, 43L, 147L, 102L, 84L, 67L, 57L, 30L, 55L, 274L, 314L, 847L, 203L, 322L, 8350L, 101L, 10L, 122L, 54L, 120L, 10L, 22L, 327L, 234L, 56L, 998L, 409L, 131L, 2163L, 81L, 19L, 6675L, 7L, 2182L, 1136L, 71L, 15L, 286L, 133L, 132L, 37L, 144L, 28L, 392L, 870L, 312L, 190L, 135L, 16L, 6L, 153L, 38L, 62L, 2710L, 36L, 61L, 37L, 88L, 375L, 88L, 131L, 73L, 212L, 918L, 185L, 53L, 143L, 69L, 2231L, 54L, 23L, 220L, 195L, 468L, 2009L, 364L, 54L, 277L, 1547L, 240L, 1700L, 1533L, 374L, 363L, 35L, 97L, 19L, 87L, 67L, 22L, 267L, 16L, 11L, 35L, 460L, 44L, 58L, 26L, 13L, 172L, 114L, 272L, 64L, 254L, 49L, 440L, 
329L, 48L, 93L, 10L, 70L, 17L, 120L, 5229L, 118L, 133L, 43L, 2419L, 207L, 102L, 90L, 127L, 3939L, 14L, 5L, 552L, 425L, 656L, 511L, 170L, 396L, 177L, 3680L, 111L, 21L, 320L, 367L, 51L, 672L, 1675L, 59L, 91L, 281L, 113L, 19L, 37L, 65L, 288L, 27L, 149L, 61L, 63L, 75L, 165L, 90L, 9L, 12L, 82L, 111L, 157L)) Code: # lorenz curve of user contribution library(ineq) library(ggplot2) library(scales) library(grid) # compute lorenz curve lcolc <- Lc(data$lco) # bring lorenz curve in another format easily readable by ggplot2 # namely reverse the L column so that lorenz curve is mirrored on diagonal # p stays p (the diagonal) # Uprob contains the indices of the L's, but we need percentiles lcdf <- data.frame(L = rev(1-lcolc$L), p = lcolc$p, Uprob = c(1:length(lcolc$L)/length(lcolc$L))) # basic plot with the diagonal line and the L line p <- ggplot(lcdf, aes(x = Uprob, y = L)) + geom_line(colour = hcl(h=15, l=65, c=100)) + geom_line(aes(x = p, y = p)) # compute annotation lines at 50 percent L (uses a heuristic) index <- which(lcdf$L >= 0.499 & lcdf$L <= 0.501)[1] ypos <- lcdf$L[index] yposs <- c(0,ypos) xpos <- index/length(lcdf$L) xposs <- c(0,xpos) ypositions <- data.frame(x = xposs, y = c(ypos,ypos)) xpositions <- data.frame(x = c(xpos,xpos), y = yposs) # add annotation line p <- p + geom_line(data = ypositions, aes(x = x, y = y), linetype="dashed") + geom_line(data = xpositions, aes(x = x, y = y), linetype="dashed") # set axes and labels (namely insert custom breaks in scales) p <- p + scale_x_continuous(breaks=c(0, xpos,0.25,0.5,0.75,1), labels = percent_format()) + scale_y_continuous( labels = percent_format()) # add minimal theme p <- p + theme_minimal() + xlab("Percentage of objects") + ylab("Percentage of events") # customize theme p <- p + theme(plot.margin = unit(c(0.5,1,1,1), "cm"), axis.title.x = element_text(vjust=-1), axis.title.y = element_text(angle=90, vjust=0), panel.grid.minor = element_blank(), plot.background = element_rect(fill = rgb(0.99,0.99,0.99), 
linetype=0)) # print plot p
Cumulative / Cumulative Plot (or "Visualizing a Lorenz Curve") I found a way to quickly visualize the Lorenz curve with ggplot2, resulting in a more aesthetic and easier-to-interpret graphic. For this latter reason, I mirrored the Lorenz curve on the diagonal lin
26,544
Cumulative / Cumulative Plot (or "Visualizing a Lorenz Curve")
Two more ways to do this as I was recently working on this for vaccine clinical trials: 1.Use Hmisc Ecdf. This is straight forward and plots it out though bit difficult to figure out details on changing different elements of the graph. 2.Calculate cumulative distribution and then 1-cumulative is reverse cumulative. Plot the reverse using ggplot2 using geom_step if you like a step function in the graph. The function below would use ecdf from base r to give you cumulative distribution and then 1-cumulative: rcdf <- function (x) { cdf <- ecdf(x) y <- cdf(x) xrcdf <- 1-y } in the above rcdf is a user-defined function defined using ecdf.
Cumulative / Cumulative Plot (or "Visualizing a Lorenz Curve")
Two more ways to do this as I was recently working on this for vaccine clinical trials: 1.Use Hmisc Ecdf. This is straight forward and plots it out though bit difficult to figure out details on changi
Cumulative / Cumulative Plot (or "Visualizing a Lorenz Curve") Two more ways to do this as I was recently working on this for vaccine clinical trials: 1.Use Hmisc Ecdf. This is straight forward and plots it out though bit difficult to figure out details on changing different elements of the graph. 2.Calculate cumulative distribution and then 1-cumulative is reverse cumulative. Plot the reverse using ggplot2 using geom_step if you like a step function in the graph. The function below would use ecdf from base r to give you cumulative distribution and then 1-cumulative: rcdf <- function (x) { cdf <- ecdf(x) y <- cdf(x) xrcdf <- 1-y } in the above rcdf is a user-defined function defined using ecdf.
Cumulative / Cumulative Plot (or "Visualizing a Lorenz Curve") Two more ways to do this as I was recently working on this for vaccine clinical trials: 1.Use Hmisc Ecdf. This is straight forward and plots it out though bit difficult to figure out details on changi
26,545
Rules to apply Monte Carlo simulation of p-values for chi-squared test
By searching, it seems that the point of Monte-Carlo Simulation is to produce a reference distribution, based on randomly generated samples which will have the same size as the tested sample, in order to compute p-values when test conditions are not satisfied. This is explained in Hope A. J Royal Stat Society Series B (1968) which can be found on JSTOR. Here is a relevant quote from the Hope paper: Monte-Carlo significance test procedures consist of the comparison of the observed data with random samples generated in accordance with the hypothesis being tested. ... It is preferable to use a known test of good efficiency instead of a Monte-Carlo test procedure assuming that the alternative statistical hypothesis can be completely specified. However, it is not always possible to use such a test because the necessary conditions for applying the test may not be satisfied, or the underlying distribution may be unknown or it may be difficult to decide on an appropriate test criterion.
Rules to apply Monte Carlo simulation of p-values for chi-squared test
By searching, it seems that the point of Monte-Carlo Simulation is to produce a reference distribution, based on randomly generated samples which will have the same size as the tested sample, in order
Rules to apply Monte Carlo simulation of p-values for chi-squared test By searching, it seems that the point of Monte-Carlo Simulation is to produce a reference distribution, based on randomly generated samples which will have the same size as the tested sample, in order to compute p-values when test conditions are not satisfied. This is explained in Hope A. J Royal Stat Society Series B (1968) which can be found on JSTOR. Here is a relevant quote from the Hope paper: Monte-Carlo significance test procedures consist of the comparison of the observed data with random samples generated in accordance with the hypothesis being tested. ... It is preferable to use a known test of good efficiency instead of a Monte-Carlo test procedure assuming that the alternative statistical hypothesis can be completely specified. However, it is not always possible to use such a test because the necessary conditions for applying the test may not be satisfied, or the underlying distribution may be unknown or it may be difficult to decide on an appropriate test criterion.
Rules to apply Monte Carlo simulation of p-values for chi-squared test By searching, it seems that the point of Monte-Carlo Simulation is to produce a reference distribution, based on randomly generated samples which will have the same size as the tested sample, in order
26,546
Sampling distribution of regression coefficients
This part primarily relates to your first, third and fourth question: There's a fundamental difference between Bayesian statistics and frequentist statistics. Frequentist statistics makes inference about which fixed parameter values are consistent with data viewed as random, usually via the likelihood. You take $\theta$ (some parameter or parameters) as fixed but unknown, and see which ones make the data more likely; it looks at the properties of sampling from some model given the parameters to make inference about where the parameters might be. (A Bayesian might say the frequentist approach is based on 'the frequencies of things that didn't happen') Bayesian statistics looks at the information on parameters in terms of a probability distribution on them, which is updated by data, via the likelihood. Parameters have distributions, so you look at $P(\theta|\underline{x})$. This results in things which often look similar but where the variables in one look "the wrong way around" viewed through the lens of the other way of thinking about it. So, fundamentally they're somewhat different things, and the fact that things that are on the LHS of one are on the RHS of the other is no accident. If you do some work with both, it soon becomes reasonably clear. The second question seems to me to relate simply to a typo. --- the statement "equivalent to the usual frequentist sampling distribution, that is" : I took this to mean that the authors were stating the frequentist sampling distribution. Have I read this wrongly? There's two things going on there - they've expressed something a bit loosely (people do this particular kind of over-loose expression all the time), and I think you're also interpreting it differently from the intent. What exactly does the expression they give mean, then ? Hopefully the discussion below will help clarify the intended sense. If you can provide a reference (pref. 
online as I don't have good library access) where this expression is derived I would be grateful. It follows right from here: http://en.wikipedia.org/wiki/Bayesian_linear_regression by taking flat priors on $\beta$ and I think a flat prior for $\sigma^2$ as well. The reason is that the posterior is thereby proportional to the likelihood and the intervals generated from the posteriors on the parameters match the frequentist confidence intervals for the parameters. You might find the first few pages here helpful as well.
Sampling distribution of regression coefficients
This part primarily relates to your first, third and fourth question: There's a fundamental difference between Bayesian statistics and frequentist statistics. Frequentist statistics makes inference ab
Sampling distribution of regression coefficients This part primarily relates to your first, third and fourth question: There's a fundamental difference between Bayesian statistics and frequentist statistics. Frequentist statistics makes inference about which fixed parameter values are consistent with data viewed as random, usually via the likelihood. You take $\theta$ (some parameter or parameters) as fixed but unknown, and see which ones make the data more likely; it looks at the properties of sampling from some model given the parameters to make inference about where the parameters might be. (A Bayesian might say the frequentist approach is based on 'the frequencies of things that didn't happen') Bayesian statistics looks at the information on parameters in terms of a probability distribution on them, which is updated by data, via the likelihood. Parameters have distributions, so you look at $P(\theta|\underline{x})$. This results in things which often look similar but where the variables in one look "the wrong way around" viewed through the lens of the other way of thinking about it. So, fundamentally they're somewhat different things, and the fact that things that are on the LHS of one are on the RHS of the other is no accident. If you do some work with both, it soon becomes reasonably clear. The second question seems to me to relate simply to a typo. --- the statement "equivalent to the usual frequentist sampling distribution, that is" : I took this to mean that the authors were stating the frequentist sampling distribution. Have I read this wrongly? There's two things going on there - they've expressed something a bit loosely (people do this particular kind of over-loose expression all the time), and I think you're also interpreting it differently from the intent. What exactly does the expression they give mean, then ? Hopefully the discussion below will help clarify the intended sense. If you can provide a reference (pref. 
online as I don't have good library access) where this expression is derived I would be grateful. It follows right from here: http://en.wikipedia.org/wiki/Bayesian_linear_regression by taking flat priors on $\beta$ and I think a flat prior for $\sigma^2$ as well. The reason is that the posterior is thereby proportional to the likelihood and the intervals generated from the posteriors on the parameters match the frequentist confidence intervals for the parameters. You might find the first few pages here helpful as well.
Sampling distribution of regression coefficients This part primarily relates to your first, third and fourth question: There's a fundamental difference between Bayesian statistics and frequentist statistics. Frequentist statistics makes inference ab
26,547
Expected number of duplicates (triplicates etc) when drawing with replacement
The $i^{th}$ iterm will be selected $\text{Binom}(m, \, 1/n)$ times. From this, you can find all the quantities you want, because, e.g., $$\mathbb{E}[\text{number of pairs}] = \sum_{i = 1}^n \mathbb{P}[i^{th} \text{ item appears twice}] $$ For example, the expected number of pairs is given by $$ n \cdot \mathbb{P}[\text{Binom}(m, \, 1/n) = 2]. $$ You can get the numeric value in R with the command n*dbinom(k, m, 1/n).
Expected number of duplicates (triplicates etc) when drawing with replacement
The $i^{th}$ iterm will be selected $\text{Binom}(m, \, 1/n)$ times. From this, you can find all the quantities you want, because, e.g., $$\mathbb{E}[\text{number of pairs}] = \sum_{i = 1}^n \mathbb{
Expected number of duplicates (triplicates etc) when drawing with replacement The $i^{th}$ iterm will be selected $\text{Binom}(m, \, 1/n)$ times. From this, you can find all the quantities you want, because, e.g., $$\mathbb{E}[\text{number of pairs}] = \sum_{i = 1}^n \mathbb{P}[i^{th} \text{ item appears twice}] $$ For example, the expected number of pairs is given by $$ n \cdot \mathbb{P}[\text{Binom}(m, \, 1/n) = 2]. $$ You can get the numeric value in R with the command n*dbinom(k, m, 1/n).
Expected number of duplicates (triplicates etc) when drawing with replacement The $i^{th}$ iterm will be selected $\text{Binom}(m, \, 1/n)$ times. From this, you can find all the quantities you want, because, e.g., $$\mathbb{E}[\text{number of pairs}] = \sum_{i = 1}^n \mathbb{
26,548
Interpretation of conditional density plots
For example, is the probability of the Result being equal to 1 when Var 1 is 150 approximately 80%? No, it's the other way around. The probability that Result $=0$ when Var1 $=150$ is approximately 80%. Likewise, the probability that Result $=1$ when Var1 $=150$ is approximately 20%. The dark grey area is that which is the conditional probability of the Result being equal to 1, right? The dark shaded area corresponds to Result $=0$; the light shaded area corresponds to Result $=1$. If you have more than two levels in your Result factor, it will probably be more obvious what is being portrayed. We are just used to looking at density functions so this presentation can be confusing at first. How does this accumulation affect how these plots are interpreted? Looking at the source for cdplot(), what I think is going on here is that the smoothed proportions of the results are weighted by the density of the explanatory variable. So, the distributions of the dependent variable are going to be better represented in higher density regions of the explanatory variable. One way of interpreting that is that where there are regions of the explanatory variable with few points, the conditional distributions will not be as well determined. Where there are regions of the explanatory variable with more points, the conditional distributions will be better determined.
Conceptual distinction between heteroscedasticity and non-stationarity
To give precise definitions, let $X_1, \ldots, X_n$ be real valued random variables. Stationarity is usually only defined if we think of the index of the variables as time. In this case the sequence of random variables is stationary if $X_1, \ldots, X_{n-1}$ has the same distribution as $X_2, \ldots, X_n$. This implies, in particular, that $X_i$ for $i = 1, \ldots, n$ all have the same marginal distribution and thus the same marginal mean and variance (given that they have finite second moment). The meaning of heteroscedasticity can depend on the context. If the marginal variances of the $X_i$'s change with $i$ (even if the mean is constant) the random variables are called heteroscedastic in the sense of not being homoscedastic. In regression analysis we usually consider the variance of the response conditionally on the regressors, and we define heteroscedasticity as a non-constant conditional variance. In time series analysis, where the terminology conditional heteroscedasticity is common, the interest is typically in the variance of $X_k$ conditionally on $X_{k-1}, \ldots, X_1$. If this conditional variance is non-constant we have conditional heteroscedasticity. The ARCH (autoregressive conditional heteroscedasticity) model is the most famous example of a stationary time series model with non-constant conditional variance. Heteroscedasticity (conditional heteroscedasticity in particular) does not imply non-stationarity in general. Stationarity is important for a number of reasons. One simple statistical consequence is that the average $$\frac{1}{n} \sum_{i=1}^n f(X_i)$$ is then an unbiased estimator of the expectation $E f(X_1)$ (and assuming ergodicity, which is slightly more than stationarity and often assumed implicitly, the average is a consistent estimator of the expectation for $n \to \infty$). The importance of heteroscedasticity (or homoscedasticity) is, from a statistical point of view, related to the assessment of statistical uncertainty, e.g. the computation of confidence intervals. If computations are carried out under an assumption of homoscedasticity while the data actually shows heteroscedasticity, the resulting confidence intervals can be misleading.
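The ARCH(1) point can be checked by simulation. The sketch below (Python rather than R; the parameter values are arbitrary illustrations) generates $x_t = \sigma_t z_t$ with $\sigma_t^2 = a_0 + a_1 x_{t-1}^2$: the conditional variance moves around constantly, yet the unconditional variance stays at the constant $a_0/(1-a_1)$, consistent with stationarity.

```python
import random

random.seed(1)

def simulate_arch1(a0, a1, n):
    """ARCH(1): x_t = sigma_t * z_t with sigma_t^2 = a0 + a1 * x_{t-1}^2."""
    x, sig2 = [], []
    prev = 0.0
    for _ in range(n):
        s2 = a0 + a1 * prev ** 2     # conditional variance given the past
        prev = random.gauss(0.0, 1.0) * s2 ** 0.5
        x.append(prev)
        sig2.append(s2)
    return x, sig2

x, sig2 = simulate_arch1(a0=1.0, a1=0.5, n=100_000)
mean = sum(x) / len(x)
var = sum((xi - mean) ** 2 for xi in x) / len(x)
# The unconditional variance is a0 / (1 - a1) = 2, so `var` should hover
# near 2 even though the conditional variance sig2 fluctuates over time.
```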
Conceptual distinction between heteroscedasticity and non-stationarity
There are three degrees of stationarity. The weak form requires that the mean and variance be constant. This means that all three definitions of stationarity are stronger requirements than homoscedasticity, because homoscedasticity only means constant variance, without reference to the mean. A process can be homoscedastic, but if its mean is not constant, then the process is not (weakly) stationary. A stationary process (let's denote it by 'S') implies homoscedasticity (let's denote it by 'H'). So S --> H. Naturally its contrapositive is also true: H' --> S', i.e. heteroscedasticity implies non-stationarity. But the converse and the inverse are not true. In other words: "non-stationarity implies heteroscedasticity" is not true, and "there exists a stationary process that is heteroscedastic" is not true.
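The "homoscedastic but non-stationary" case is easy to exhibit by simulation. A small Python sketch (parameter values are arbitrary): a linear trend plus white noise has constant noise variance everywhere, yet its mean drifts, so it fails weak stationarity.

```python
import random

random.seed(0)

# Constant noise variance, drifting mean: homoscedastic, not stationary.
n = 1000
x = [0.01 * t + random.gauss(0.0, 1.0) for t in range(n)]

first, second = x[: n // 2], x[n // 2:]
mean1 = sum(first) / len(first)
mean2 = sum(second) / len(second)
var1 = sum((v - mean1) ** 2 for v in first) / len(first)
var2 = sum((v - mean2) ** 2 for v in second) / len(second)
# The two halves have roughly equal variance but clearly different means:
# the mean shift alone breaks (weak) stationarity.
```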
Conceptual distinction between heteroscedasticity and non-stationarity
A time series is stationary if all its statistical properties do not depend upon the time origin. If this requirement is not fulfilled, the time series is not stationary. Even a stationary time series cannot be described on the basis of just one sample record. Its statistical properties must be analyzed by averaging over the ensemble of sample records at different time origins. If statistical properties are the same for any individual sample record and for the case when they are determined through ensemble averaging, the time series is ergodic. As the statistical properties of a heteroscedastic time series are time-dependent, it is not stationary and, of course, not ergodic. Its properties determined for a single sample record cannot be extended to its past and future behavior. Incidentally, correlation/regression analysis cannot be applied to time series, as the dependence between them (the coherence function) is frequency-dependent and can be characterized through (multivariate) stochastic difference equations (time domain) or the frequency response function(s) (frequency domain). Extending regression analysis developed for random variables to time series is erroneous (e.g. see Bendat and Piersol, 2010; Box et al., 2015).
What are the assumptions for quantile regression? [duplicate]
The advantage of QR techniques is that they do not require, for instance, homoskedasticity of the error terms or strong assumptions on the distribution of the covariates. If you are interested in applications to ecology there is a nice and concise introduction by Cade. Or if you come from economics, the gentle treatment given by Angrist and Pischke may fit you better.
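The reason no distributional assumption is needed is that quantile regression minimizes the "pinball" (check) loss; for a constant-only model the minimizer is simply the sample quantile, whatever the error distribution looks like. A minimal Python sketch (function names are mine):

```python
def pinball_loss(u, tau):
    """Check loss rho_tau(u): tau*u for u >= 0, (tau - 1)*u otherwise."""
    return tau * u if u >= 0 else (tau - 1) * u

def fit_constant_quantile(y, tau):
    """Minimize the summed pinball loss over a constant predictor.
    The optimum is always attained at one of the data points."""
    return min(y, key=lambda q: sum(pinball_loss(yi - q, tau) for yi in y))
```

With tau = 0.5 this recovers the median; with other tau values the corresponding sample quantile, with no appeal to normality or homoskedasticity.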
PCA, ICA and Laplacian eigenmaps
The answer to your question is given by the mapping at the bottom of Page 6 of the original Laplacian Eigenmaps paper : $x_i \rightarrow (f_1(i), \dots, f_m(i))$ So for instance, the embedding of a point $x_5$ in, say, the top 2 "components" is given by $(f_1(5), f_2(5))$ where $f_1$ and $f_2$ are the eigenvectors corresponding to the two smallest non-zero eigenvalues from the generalized eigenvalue problem $L f = \lambda D f$. Note that unlike in PCA, it is not straightforward to obtain an out-of-sample embedding. In other words, you can obtain the embedding of a point that was already considered when computing $L$, but not (easily) if it is a new point. If you're interested in doing the latter, look up this paper.
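Concretely, for a 3-node path graph the generalized problem $Lf = \lambda Df$ can be checked by hand. The pure-Python sketch below (my own toy example, not from the paper) verifies that $f = (1, 0, -1)$ is a non-trivial generalized eigenvector with $\lambda = 1$, so the 1-D embedding of node $i$ is simply $f(i)$: the two endpoints land at opposite ends, the middle node in between.

```python
# 3-node path graph: 0 -- 1 -- 2
W = [[0, 1, 0],
     [1, 0, 1],
     [0, 1, 0]]
deg = [sum(row) for row in W]                       # D is diag(deg)
L = [[(deg[i] if i == j else 0) - W[i][j] for j in range(3)]
     for i in range(3)]                             # L = D - W

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# f = (1, 0, -1) solves L f = lam * D f with lam = 1, the smallest
# non-zero generalized eigenvalue for this graph (lam = 0 belongs to
# the constant vector, which is skipped).
f, lam = [1, 0, -1], 1
Lf = matvec(L, f)
Df = [deg[i] * f[i] for i in range(3)]
embedding = {i: f[i] for i in range(3)}             # x_i -> f(i)
```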
PCA, ICA and Laplacian eigenmaps
Here is the link to Prof Trosset's web page for the course; he is also writing a book, http://mypage.iu.edu/~mtrosset/Courses/675/notes.pdf, which gets updated every week or so. The R functions for Laplacian eigenmaps are also given there, so just try it for yourself. You may also consider this paper by Belkin. Thanks, Abhik (student of Prof Trosset)
PCA, ICA and Laplacian eigenmaps
Unlike PCA, Laplacian eigenmaps uses the generalized eigenvectors corresponding to the smallest eigenvalues. It skips the eigenvector with the smallest eigenvalue (which can be zero), and uses the eigenvectors corresponding to the next few smallest eigenvalues. PCA is a maximum-variance-preserving embedding based on the kernel/Gram matrix. Laplacian eigenmaps is posed more as a minimization problem with respect to the combinatorial graph Laplacian (refer to papers by Trosset).
What are some good exploratory analysis and diagnostic plots for count data? [duplicate]
Even given your extended description of your project in the comments it is, IMO, still too broad a question to give much useful advice. Specifically for analysis of categorical data (which your count data would seem to fall within, given the description), I would perhaps suggest the work of Michael Friendly. He published the books Visualizing Categorical Data and Discrete Data Analysis with R: Visualization and Modeling Techniques for Categorical and Count Data, specific to analysis in SAS and R, respectively. Potentially 20 values though will be a stretch for some (many?) of the suggested categorical displays. For diagnostic plots, I would suggest John Fox's Regression Diagnostics green book. It is in the context of linear regression models, but the majority of the same diagnostics (or very similar ones) can be utilized for generalized linear models as well. I suspect most advice given in general for EDA for continuous data types could be fairly easily extended to low-count data, so I wouldn't be fixated on looking for resources specific to count data. It matters more for models, but the way to graphically explore/present/diagnose those models should be fairly similar.
What are some good exploratory analysis and diagnostic plots for count data? [duplicate]
Have a look at this book (and associated R package) M Friendly: Visualizing Categorical Data http://www.amazon.com/Visualizing-Categorical-Data-Michael-Friendly/dp/1580256600 I will give some examples with R and the package vcd implementing ideas from the above book (the book has code in SAS).

First some plots for investigating count data distributions. Counting the number of successes in a fixed number of trials, there is the binomial distribution. But if the assumption of equal success probability in each trial does not hold (or independence is violated) we might get some other distribution. I will use the famous dataset of gender distributions in families in Saxony.

library(vcd)
data(Saxony)
# Show a kind of "hanging rootogram"
gf0 <- goodfit(Saxony, type = "binomial")
summary(gf0)

Goodness-of-fit test for binomial distribution

                     X^2 df     P(> X^2)
Likelihood Ratio 97.0065 11 6.978187e-16

plot(gf0)

This shows a hanging rootogram, a kind of histogram where the bars are hanging from the theoretical distribution curve, so deviations can all be compared with the x-axis. It is a "rootogram" because it shows square roots of frequencies, so that all deviations are on about the same scale (approximating the count in each bar with a Poisson distribution, for which the square root is a variance-stabilizing transformation). We can observe a systematic deviation from the binomial distribution.

Then the same with a Poisson model, using the famous horsekicks data.

data("HorseKicks")  # von Bortkiewicz's famous data
gf <- goodfit(HorseKicks, type = "poisson")
summary(gf)

Goodness-of-fit test for poisson distribution

                      X^2 df P(> X^2)
Likelihood Ratio 0.8682214  3 0.8330891

so in this case the null hypothesis of a Poisson distribution is not rejected.

plot(gf)

Contingency tables are also an example of count data, counting the occurrences in each cell of the table. We use the much discussed data of admissions at UC Berkeley.

data(UCBAdmissions)
apply(UCBAdmissions, c(1, 2), sum)

          Gender
Admit      Male Female
  Admitted 1198    557
  Rejected 1493   1278

We show this as a mosaic plot:

mosaicplot(apply(UCBAdmissions, c(1, 2), sum),
           main = "Student admissions at UC Berkeley")

Which gives the impression that a larger proportion of females is rejected. But is this true? Prospective students are only competing with other applicants at the same department, not with all the other prospective students in general. So maybe we should do the comparison for each department separately.

opar <- par(mfrow = c(2, 3), oma = c(0, 0, 2, 0))
for(i in 1:6)
    mosaicplot(UCBAdmissions[,,i], xlab = "Admit", ylab = "Sex",
               main = paste("Department", LETTERS[i]))
mtext(expression(bold("Student admissions at UC Berkeley")),
      outer = TRUE, cex = 1.5)

and now the conclusion is less clear.
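For readers without R, the likelihood-ratio statistic that goodfit() reports for the horsekick data can be reproduced directly. A Python sketch (the observed frequencies are von Bortkiewicz's published counts of deaths per corps-year):

```python
from math import exp, factorial, log

# Deaths per corps-year 0, 1, 2, 3, 4 occurred this many times:
obs = [109, 65, 22, 3, 1]
n = sum(obs)                                        # 200 corps-years
lam = sum(k * o for k, o in enumerate(obs)) / n     # ML estimate: 0.61

# Expected counts under a fitted Poisson(lam).
expected = [n * exp(-lam) * lam ** k / factorial(k) for k in range(len(obs))]

# Likelihood-ratio statistic: 2 * sum obs * log(obs / expected),
# with df = 5 cells - 1 - 1 estimated parameter = 3.
lr = 2 * sum(o * log(o / e) for o, e in zip(obs, expected) if o > 0)
# lr comes out near the 0.868 reported by goodfit().
```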
Imputation of a censored variable
Any method of imputation, including multiple imputation, is a shot in the dark if you can't take account of how the data above 50 are distributed. Since you have 200 variables, are any of them correlated with the biomarker? If you could fit a regression for the biomarker as a function of the covariates, you could use that model to predict the values for the truncated ones. You could apply an error to the prediction based on the residual variance in the model to generate multiple imputations that way. It would be more sensible. Of course this assumes you can find a valid model and that the residuals have zero mean and constant variance. You would then fit only the non-truncated biomarker values to construct the model.
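The suggested procedure can be sketched like this (Python, with simulated data; the single covariate, the true coefficients, and the cutoff of 50 are all illustrative assumptions): fit OLS on the uncensored cases only, then generate each imputation as the model prediction plus a draw from the residual distribution.

```python
import random

random.seed(42)

# Simulated: biomarker = 5 + 0.5 * covariate + noise, censored at 50.
xs = [random.uniform(0, 120) for _ in range(500)]
ys = [5 + 0.5 * x + random.gauss(0, 3) for x in xs]
complete = [(x, y) for x, y in zip(xs, ys) if y < 50]   # observed cases

# OLS fit on the complete (non-truncated) cases only.
n = len(complete)
mx = sum(x for x, _ in complete) / n
my = sum(y for _, y in complete) / n
beta = (sum((x - mx) * (y - my) for x, y in complete)
        / sum((x - mx) ** 2 for x, _ in complete))
alpha = my - beta * mx
resid_sd = (sum((y - alpha - beta * x) ** 2 for x, y in complete)
            / (n - 2)) ** 0.5

def impute(x):
    """One stochastic imputation: prediction plus residual noise."""
    return alpha + beta * x + random.gauss(0, resid_sd)

imputations = [impute(100) for _ in range(20)]   # multiple imputations
```

Note the caveat in the answer: fitting only on complete cases assumes the model is valid there, and the selective dropping near the cutoff can still bias the fit somewhat.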
Simulating distributions
I will answer your point about simulations with R because this is the only one I am familiar with. R has a lot of builtin distributions which you can simulate. The logics of naming is that to simulate a distribution called dis the name will be rdis. Below are the ones I use most often # Some continuous distributions. ?rnorm ?runif ?rgamma ?rlnorm ?rweibull ?rexp ?rt # Some discrete distributions. ?rpoiss ?rbinom ?rnbinom ?rgeom ?rhyper You can find some complements in Fitting distributions with R. Addition: thanks to @jthetzel for providing a link with a comprehensive list of distributions and the packages they belong to. But wait, there's more: OK, following @whuber's comment I'll try to address the other points. Regarding point 1, I never go by a goodness-of-fit approach. Instead I always think about the origin of the signal, like what causes the phenomenon, is there some natural symmetries in what produces it etc. You need several book's chapters to cover it so I'll just give two examples. If the data are counts and there is no upper limit, I try a Poisson. Poisson variables can be interpreted as the counts of successive independent during a time window, which is a very general framework. I fit the distribution and see (often visually) whether the variance is well described. Quite often, the variance of the sample is much higher, in which case I use a Negative Binomial. Negative Binomial can be interpreted as a mix of Poisson with different variables, which is even more general, so this usually fits very well to the sample. If I think that the data is symmetric around the mean, i.e. that deviations are equally likely to be positive or negative, I try to fit a Gaussian. I then check (again visually) whether there is a lot of outliers, i.e data points very far away from the mean. If there are, I use a Student's t instead. The Student's t distribution can be interpreted as a mixture of Gaussian with different variances, which is again very general. 
In those examples, when I say visually, I mean that I use a Q-Q plot. Point 3 also deserves several book chapters. The effects of using one distribution instead of another are limitless, so instead of going through it all, I will continue the two examples above. In my early days, I did not know that the Negative Binomial has a meaningful interpretation, so I used the Poisson all the time (because I like to be able to interpret the parameters in human terms). Very often, when you use a Poisson, you fit the mean nicely, but you underestimate the variance. This means that you are unable to reproduce the extreme values of your sample and you will treat such values as outliers (data points that do not have the same distribution as the other points) while they are actually not. Again in my early days, I did not know that Student's t also has a meaningful interpretation, and I would use the Gaussian all the time. A similar thing happened: I would fit the mean and the variance well, but I would still not capture the outliers, because under a Gaussian almost all data points are supposed to be within 3 standard deviations of the mean. Again, I concluded that some points were "extraordinary", while actually they were not.
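As a small illustration of the count example above (a sketch with simulated data, not from the original answer): overdispersion relative to a Poisson shows up as a sample variance well above the sample mean, and Q-Q plots make the mismatch visible.

```r
set.seed(1)
x <- rnbinom(1000, size = 2, mu = 5)   # overdispersed counts (Negative Binomial)

# A Poisson with the same mean would imply variance == mean;
# here the sample variance is much larger, signalling overdispersion.
mean(x)
var(x)

# Visual checks: Q-Q plots against a fitted Poisson and Negative Binomial
qqplot(qpois(ppoints(1000), lambda = mean(x)), x,
       main = "Poisson fit: poor in the upper tail")
qqplot(qnbinom(ppoints(1000), size = 2, mu = mean(x)), x,
       main = "Negative Binomial fit")
```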
26,560
ABC model selection
Nice question building on our work! Are you aware of the follow-up paper where we derive conditions on the summary statistic to achieve consistency in the Bayes factor? This may sound too theoretical, but the consequence of the asymptotic results is quite straightforward: Given a summary statistic $T$, run an ABC algorithm based on $T$ for each model under evaluation ($i=1,\dots,I$) and estimate the parameters $\theta_i$ of those models by the ABC estimate $\hat\theta_i(T)$; simulate the distribution of the statistic $T$ for each model and each estimated parameter by a Monte Carlo experiment; check whether the means $\mathbb{E}_{\hat\theta_i(T)}[T(X)]$ are all different, using step 2 with a sufficiently large number of iterations and, e.g., a t-test. This procedure is not in the first version of the paper but should soon appear in the revised version.
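To make step 1 concrete, here is a minimal ABC rejection sampler for a toy normal-mean model (a sketch only; the model, prior, statistic and tolerance are illustrative assumptions, not from the paper):

```r
set.seed(1)
obs  <- rnorm(50, mean = 2)       # observed data (toy model: N(theta, 1))
Tsum <- mean                      # summary statistic T
eps  <- 0.05                      # tolerance

# Rejection ABC: draw theta from the prior, simulate data from the model,
# keep draws whose summary statistic is close to the observed one.
draws <- replicate(5000, {
  theta <- runif(1, -5, 5)        # flat prior on theta
  x <- rnorm(50, mean = theta)
  c(theta, abs(Tsum(x) - Tsum(obs)))
})
accepted  <- draws[1, draws[2, ] < eps]
theta_hat <- mean(accepted)       # ABC estimate hat(theta)(T) for this model
```

Repeating this for each candidate model gives the $\hat\theta_i(T)$ of step 1; step 2 then simulates $T$ under each fitted model so that the means $\mathbb{E}_{\hat\theta_i(T)}[T(X)]$ can be compared.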
26,561
How to compare joint distribution to product of marginal distributions?
Assuming that the theoretical distributions of $x_1$ and $x_2$ are not known, a naive algorithm for determining independence would be as follows: Define $x_{1,2}$ to be the set of all co-occurrences of values from $x_1$ and $x_2$. For example, if $x_1 = \{ 1, 2, 2 \}$ and $x_2 = \{ 3, 6, 5\}$, the set of co-occurrences would be $\{(1,3), (1, 6), (1, 5), (2, 3), (2,6), (2,5), (2, 3), (2,6), (2,5)\}$. Estimate the probability density functions (PDFs) of $x_1$, $x_2$ and $x_{1,2}$, denoted $P_{x_1}$, $P_{x_2}$ and $P_{x_{1,2}}$. Compute the root-sum-of-squares error $$y=\sqrt{\sum_{(y_1,y_2)} \big(P_{x_{1,2}}(y_1,y_2) - P_{x_1}(y_1)\,P_{x_2}(y_2)\big)^2},$$ where $(y_1,y_2)$ ranges over the pairs in $x_{1,2}$. If $y$ is close to zero, it means that $x_1$ and $x_2$ are independent. A simple way to estimate a PDF from a sample is to compute the sample's histogram and then normalize it so that the PDF integrates to 1. Practically, that means you have to divide the bin counts of the histogram by the factor $h \cdot \sum_i n_i$, where $h$ is the bin width and $n$ is the vector of bin counts. Note that the last step of this algorithm requires the user to specify a threshold for deciding whether the signals are independent.
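A rough R sketch of this recipe, assuming the two signals are observed as paired samples and using simple histogram density estimates (the bin limits and widths are illustrative choices):

```r
set.seed(1)
x1 <- rnorm(1000)                 # two signals, independent by construction
x2 <- rnorm(1000)

breaks <- seq(-5, 5, by = 0.5)    # common bins covering both samples
h <- 0.5                          # bin width

# Normalized histograms: divide bin counts by h * sum(counts) so that each
# estimated PDF integrates to 1.
p1 <- hist(x1, breaks = breaks, plot = FALSE)$counts / (h * length(x1))
p2 <- hist(x2, breaks = breaks, plot = FALSE)$counts / (h * length(x2))

# Joint PDF estimate from the paired values, same normalization in 2D
joint <- table(cut(x1, breaks), cut(x2, breaks)) / (h^2 * length(x1))

# Root-sum-of-squares distance between joint and product of marginals
y <- sqrt(sum((joint - outer(p1, p2))^2))
```

For independent signals `y` stays near the sampling-noise floor; strongly dependent signals push it well above that level, and the user-chosen threshold sits between the two.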
26,562
How to compare joint distribution to product of marginal distributions?
If you are trying to do a test of independence, it's better to use well-developed test statistics than to come up with a new one. For example, you can start by computing the Chi-squared test of independence. Of course, visualizing the difference between the product of the marginals and the joint will give you good insight, so I encourage you to compute it as well.
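A minimal sketch with simulated categorical data (the data are illustrative): chisq.test compares the observed table against the product-of-marginals table, which it also returns as the expected counts.

```r
set.seed(1)
x <- sample(1:3, 500, replace = TRUE)
y <- sample(1:4, 500, replace = TRUE)    # independent of x by construction

tab  <- table(x, y)
test <- chisq.test(tab)
test$p.value                             # p-value of the independence test

# The "product of marginals" benchmark is exactly the expected table
expected <- outer(rowSums(tab), colSums(tab)) / sum(tab)
```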
26,563
How to compare joint distribution to product of marginal distributions?
You could compare the joint empirical distribution function with the product of the marginal empirical distribution functions. For a paired sample $(x_i, y_i)$, $i=1,\dots,n$, define the joint empirical distribution function $$ \hat{F}_n(s,t) = \frac{1}{n} \sum_{i=1}^{n} I_{[x_i,\infty)\times[y_i,\infty)}(s,t) \, . $$ The marginal empirical distribution functions are $$ \hat{G}_{n}(s) = \frac{1}{n} \sum_{i=1}^{n} I_{[x_i,\infty)}(s) \quad \textrm{and} \quad \hat{H}_{n}(t) = \frac{1}{n} \sum_{i=1}^{n} I_{[y_i,\infty)}(t) \, . $$ The idea is to compare $\hat{F}_{n}(s,t)$ with the product $\hat{G}_{n}(s)\hat{H}_{n}(t)$ using some norm. For example, you could use $$ T(x,y) = \sup_{s,t} \bigg\vert \hat{F}_{n}(s,t) - \hat{G}_{n}(s)\hat{H}_{n}(t) \bigg\vert \, . $$ If we could know the distribution of $T$ under the hypothesis of independence, then we would have a way to compute a $p$-value for this problem. I don't know how this can be done.
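As a sketch of how $T$ might be used in practice (not part of the original answer): evaluate the empirical distribution functions on the observed points, and approximate the null distribution of $T$ by permuting one margin, which breaks any dependence while preserving both marginals.

```r
set.seed(1)
n <- 200
x <- rnorm(n)
y <- x + 0.5 * rnorm(n)               # strongly dependent pair (toy data)

# T(x, y) = sup over the observed grid of |Fhat - Ghat * Hhat|
Tstat <- function(x, y) {
  n <- length(x)
  Ix <- outer(x, x, "<=")             # Ix[i, k] = 1{x_i <= x_k}
  Iy <- outer(y, y, "<=")
  Fhat <- crossprod(Ix, Iy) / n       # joint EDF evaluated at all (x_k, y_l)
  G <- colMeans(Ix)                   # marginal EDF of x
  H <- colMeans(Iy)                   # marginal EDF of y
  max(abs(Fhat - outer(G, H)))
}

T_obs  <- Tstat(x, y)
T_null <- replicate(200, Tstat(x, sample(y)))  # permutation null distribution
p_value <- mean(T_null >= T_obs)
```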
26,564
Sample size and cross-validation methods for Cox regression predictive models
You have nicely described the problem and have set it up well in a number of ways. I wasn't clear on the definition of "prognostic score", but it is very unlikely that a 2-level score is clinically helpful. It is important to adjust for all pertinent available clinical variables, based on expert opinion when choosing them. Here are some opportunities for improvement:
1. 10-fold cross-validation is unstable and needs to be repeated 100 times to obtain adequate precision (or use the Efron-Gong optimism bootstrap with 400 resamples; both of these are available in the R rms package).
2. Dividing the signal into "good" and "bad" driven by ROC curves is a popular technique but was not based on any good statistical principles. Any biomarker worth its salt should have a dose-response relationship, and division into two very arbitrary groups is unnecessary, misleading, and information- and power-losing. ROC curves have absolutely nothing to offer in this context.
3. Choosing cutpoints on the biomarkers is a statistical disaster. Among other things, it fails to recognize that mathematically, if any cutpoints are useful they can only be on the back end, not on the covariate end, because the cutpoint for each marker depends on the absolute values of all the other marker values for a patient.
4. Stepwise regression without penalization is not reliable.
5. In your setup there is no reason not to put all the markers into one model and to do a likelihood ratio $\chi^2$ test to test the value they add to clinical variables.
6. A good alternative to 5. is to do a redundancy analysis or variable clustering of the biomarkers to reduce their number before relating them to the outcome.
7. If your sample size were larger you could allow all the variables to enter the model nonlinearly using regression splines. Occasionally, allowing one biomarker to be smooth and nonlinear doubles its value over forcing linearity.
8. Let the log likelihood, which is an optimal scoring rule (penalized likelihood would be even better), do its job. Don't spend time on improper accuracy scoring rules.
Consider using the "adequacy index", based on the log likelihood, for describing the utility of the biomarkers, as described in my book Regression Modeling Strategies.
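The likelihood-ratio approach in item 5 can be sketched with the survival package (simulated toy data; all variable names are illustrative):

```r
library(survival)
set.seed(1)
n <- 200
age <- rnorm(n, 60, 10)
m1  <- rnorm(n)                      # a marker that matters (by construction)
m2  <- rnorm(n)                      # a marker that does not
time   <- rexp(n, rate = exp(0.02 * age + 0.7 * m1 - 3))
status <- rbinom(n, 1, 0.8)          # some censoring
d <- data.frame(time, status, age, m1, m2)

# Clinical model vs clinical model plus all markers at once
clinical <- coxph(Surv(time, status) ~ age, data = d)
full     <- coxph(Surv(time, status) ~ age + m1 + m2, data = d)
anova(clinical, full)                # likelihood ratio chi-square for the markers
```

For item 1, the rms package's cph() plus validate(fit, B = 400) implements the Efron-Gong optimism bootstrap mentioned there.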
26,565
Reporting results of simple linear regression: what information to include?
For a simple linear regression, I would always produce a plot of the x variable against the y variable, with the regression line superimposed on the plot (always plot your data whenever it's feasible!). This will tell you very easily how well your model fits, and is easy to read for one-variable regression. Adding that to what you've already got would probably be sufficient, although you may want to include some diagnostic plots (leverage, Cook's distance, residuals, etc.). It depends on how good that x-y plot is, on your intended audience, and on any protocols that your audience expects. $R^2$ vs RMSE: $R^2$ is a relative measure, whereas the RMSE is more of an absolute measure, as you would expect most observations to be within $\pm$RMSE of the fitted line, and nearly all to be within $\pm 2$RMSE. If you want to convey "explanatory power", $R^2$ is probably better; if you want to convey "predictive power", the RMSE is probably better.
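A minimal sketch of that workflow with simulated data:

```r
set.seed(42)
x <- runif(50, 0, 10)
y <- 2 + 0.8 * x + rnorm(50)

fit <- lm(y ~ x)

# The x-y plot with the regression line superimposed
plot(x, y)
abline(fit, col = "red")

# Relative vs absolute measures of fit
r2   <- summary(fit)$r.squared
rmse <- sqrt(mean(residuals(fit)^2))

# Standard diagnostic plots (residuals, leverage, Cook's distance)
par(mfrow = c(2, 2)); plot(fit)
```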
26,566
Reporting results of simple linear regression: what information to include?
I usually report the β coefficient plus the 95% CI, the p value and the adjusted R-squared. Ex: (β = 1.46, 95% CI [1.19, 1.8], p = 0.001 **, adjusted R2 = 0.48). If reporting a multiple regression or a regression with factor variables, I report the coefficients, the 95% CIs, the p values, and then separately the F (degrees of freedom) statistic, the adjusted R2 and the p value of the model.
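The numbers in that reporting template can all be pulled from a fitted lm object (toy data for illustration):

```r
set.seed(1)
x <- rnorm(100)
y <- 1.5 * x + rnorm(100)
fit <- lm(y ~ x)

beta   <- coef(fit)["x"]                              # slope estimate
ci     <- confint(fit, "x", level = 0.95)             # 95% CI for the slope
p      <- summary(fit)$coefficients["x", "Pr(>|t|)"]  # p value
adj_r2 <- summary(fit)$adj.r.squared                  # adjusted R-squared
```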
26,567
Combining two time-series by averaging the data points
Assuming you have the Squared Prediction Errors for the forecast and backcast individually, I would recommend this. Let w be a vector of length 12 and let m index the month you are interested in:
w <- rep(NA, 12)
for (m in 1:12) {
  w[m] <- SPE_Backcast[m] / (SPE_Backcast[m] + SPE_Forecast[m])
}
Now w[m] is the weight for the forecast and 1 - w[m] is the weight for the backcast (inverse-error weighting: the forecast gets more weight in months where the backcast error is relatively large).
26,568
Combining two time-series by averaging the data points
Your purpose is to perform a fixed interval (FI) smoothing of the time series. The smoothed value of the observation at time $t$ is defined as a conditional expectation $$ \widehat{Y}_{t} := \mathbb{E}(Y_t|\mathbf{Y}_{1:r},\,\mathbf{Y}_{s:n}) $$ where the notation $\mathbf{Y}_{u:v} := [Y_u,\,Y_{u+1}, \, \dots,\,Y_v]$ stands for the vector of the observations from time $u$ to time $v$. Above, the gap is assumed to be the interval ranging from time $r+1$ to $s-1$, and $n$ is the whole series' length. The time $t$ is in the gap, and the expectation could be written $\widehat{Y}_{t|1:r,\, s:n}$ to recall its conditional nature. The smoothed value does not have the simple form you guess. For a Gaussian stationary time series with known covariance structure, the estimate $\widehat{Y}_{t}$ for $t$ in the gap can be found by solving a linear system. When the time series model can be put in State Space (SS) form, FI smoothing is a standard operation based on Kalman filtering, and it can be done e.g. using available R functions. You simply need to specify that the values in the gap are missing. The smoothing algorithm estimates the hidden state $\boldsymbol{\alpha}_t$, which contains all the relevant information about $Y_t$ for $t$ in the gap. ARIMA models can be put in SS form. Interestingly, FI smoothing can be written as a combination of two filters, one forward and one backward, leading to a formula of the kind you expected, but for the hidden state estimate $\boldsymbol{\alpha}_t$ (forecast and backcast), not for the observation $Y_t$. This is known as Rauch-Tung-Striebel filtering. At least in their multiplicative versions, 'ad hoc' forecasting procedures like Holt-Winters rely on stochastic models with no simple FI algorithms, since they cannot be put in SS form. The smoothing formula can probably be approximated using an SS model, but it is much simpler in that case to use Structural Time Series models with log transformations.
The 'KalmanSmooth', 'tsSmooth' and 'StructTS' functions of the R stats package can do the job. You should have a look at the books by Harvey or by Durbin and Koopman cited in the R help pages. The smoothing algorithm can provide a conditional variance for the estimated $Y_t$, which can be used to build smoothing intervals; these usually tend to be larger in the middle of the gap. Note however that the estimation of Structural Models can be difficult.
AP <- log10(AirPassengers)
## Fit a Basic Structural Model
fit <- StructTS(AP, type = "BSM")
## Fit with a gap
AP.gap <- AP
AP.gap[73:96] <- NA
fit.gap <- StructTS(AP.gap, type = "BSM", optim.control = list(trace = TRUE))
## plot in original (non-logged) scale
plot(AirPassengers, col = "black", ylab = "AirPass")
AP.missing <- ts(AirPassengers[73:96], start = 1955, freq = 12)
lines(AP.missing, col = "grey", lwd = 1)
## smooth and sum 'level' and 'sea' to retrieve the series
sm <- tsSmooth(fit.gap)
fill <- apply(as.matrix(sm[ , c(1, 3)]), 1, sum)
AP.fill <- ts(fill[73:96], start = 1955, freq = 12)
lines(10^AP.fill, col = "red", lwd = 1)
26,569
Combining two time-series by averaging the data points
I find your suggested approach, of taking the means of the fore- and back-casts, interesting. One thing worth pointing out is that in any system exhibiting chaotic structure, forecasts are likely to be more accurate over shorter periods. That isn't the case for all systems; for example, a damped pendulum could be modelled by a function with the wrong period, in which case all the medium-term forecasts are likely to be wrong, while the long-term ones are all going to be very accurate, as the system converges to zero. But it looks to me, from the graph in the question, that this might be a reasonable assumption to make here. That implies that we might be better off relying more on the forecast data for the earlier part of the missing period, and more on the back-cast data for the latter part. The simplest way to do this would be to use a linearly decreasing weight for the forecast, and the opposite for the back-cast:
> n <- [number of missing datapoints]
> w <- seq(1, 0, by = -1/(n+1))[2:(n+1)]
This gives a little weight to the back-cast even on the first element. You could also use a step of -1/(n-1), without the subscripting at the end, if you wanted to use only the forecast value at the first interpolated point.
> w  # shown here for n = 12
[1] 0.92307692 0.84615385 0.76923077 0.69230769 0.61538462 0.53846154
[7] 0.46153846 0.38461538 0.30769231 0.23076923 0.15384615 0.07692308
I don't have your data, so let's try this on the AirPassengers dataset in R.
I'll just remove a two-year period near the centre: > APearly <- ts(AirPassengers[1:72], start=1949, freq=12) > APlate <- ts(AirPassengers[97:144], start=1957, freq=12) > APmissing <- ts(AirPassengers[73:96], start=1955, freq=12) > plot(AirPassengers) # plot the "missing data" for comparison > lines(APmissing, col="#eeeeee") # use the HoltWinters algorithm to predict the mean: > APforecast <- hw(APearly)[2]$mean > lines(APforecast, col="red") # HoltWinters doesn't appear to do backcasting, so reverse the ts, forecast, # and reverse again (feel free to edit if there's a better process) > backwards <- ts(rev(APlate), freq=12) > backcast <- hw(backwards)[2]$mean > APbackcast <- ts(rev(backcast), start=1955, freq=12) > lines(APbackcast, col='blue') # now the magic: > n <- 24 > w <- seq(1, 0, by=-1/(n+1))[2:(n+1)] > interpolation = APforecast * w + (1 - w) * APbackcast > lines(interpolation, col='purple', lwd=2) And there's your interpolation. Of course, it's not perfect. I guess that's a result of the patterns in the earlier part of the data being different to those in the latter part (the Jul-Aug peak is not so strong in earlier years). But as you can see from the image, it's clearly better than just the forecasting or the back casting alone. I would imagine that your data may get slightly less reliable results, as there is not such a strong seasonal variation. My guess would be that you could try this including the confidence intervals too, but I'm not sure of the validity of doing it as simply as this.
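The weighting scheme above is easy to port outside R. Here is a minimal Python sketch of the same linear blend (the function names are mine; the weights replicate seq(1, 0, by = -1/(n+1))[2:(n+1)]):

```python
def blend_weights(n):
    """Forecast weights (n+1-i)/(n+1) for i = 1..n, decreasing toward 0,
    matching seq(1, 0, by = -1/(n+1))[2:(n+1)] in R."""
    return [(n + 1 - i) / (n + 1) for i in range(1, n + 1)]

def blend(forecast, backcast):
    """Weighted average: the forecast dominates early points, the backcast late ones."""
    w = blend_weights(len(forecast))
    return [wi * f + (1 - wi) * b for wi, f, b in zip(w, forecast, backcast)]
```

For n = 12 the weights reproduce the printed R output, running from 12/13 ≈ 0.9231 down to 1/13 ≈ 0.0769.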
26,570
Visualizing multi-dimensional data (LSI) in 2D
This is what MDS (multidimensional scaling) is designed for. In short, if you're given a similarity matrix $M$, you want to find the closest approximation $S = X X^\top$ where $S$ has rank 2. This can be done by computing the SVD (equivalently, for symmetric $M$, the eigendecomposition) $M = V \Lambda V^\top = X X^\top$ where $X = V \Lambda^{1/2}$. Now, assuming that $\Lambda$ is permuted so the eigenvalues are in decreasing order, the first two columns of $X$ are your desired embedding in the plane. There's lots of code available for MDS (and I'd be surprised if scipy doesn't have some version of it). In any case, as long as you have access to some SVD routine in Python, you're set.
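If you'd rather roll it yourself than hunt for an MDS routine, a minimal numpy sketch of the rank-2 embedding described above (assuming $M$ is symmetric positive semi-definite; the function name is mine) could look like:

```python
import numpy as np

def mds_embed_2d(M):
    """Rank-2 embedding X with X @ X.T approximating M.

    Eigendecompose the symmetric matrix M, keep the two largest
    eigenvalues, and scale the eigenvectors by sqrt(eigenvalue),
    i.e. X = V Lambda^{1/2} restricted to the top two columns.
    """
    vals, vecs = np.linalg.eigh(M)            # eigenvalues in ascending order
    top2 = np.argsort(vals)[::-1][:2]         # indices of the two largest
    return vecs[:, top2] * np.sqrt(np.clip(vals[top2], 0.0, None))
```

If $M$ itself has rank 2, the embedding reproduces it exactly; otherwise it is the closest rank-2 approximation in the sense above.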
26,571
Visualizing multi-dimensional data (LSI) in 2D
There is a piece of software called ggobi that can help you. It lets you explore multi-dimensional pseudo-spaces. It's mostly for data exploration but its interface is extremely friendly and 'it-just-works'! You just need a CSV format (in R I usually just use write.csv with the default parameters) or an XML file (this format allows you more control; I usually save my table in CSV then export it to XML with ggobi and edit it manually for instance to change the order of some factors).
26,572
Longitudinal data: time series, repeated measures, or something else?
As Jeromy Anglim said, it would help to know the number of time points you have for each individual; as you said "many" I would venture that functional analysis might be a viable alternative. You might want to check the R package fda and look at the book by Ramsay and Silverman.
26,573
Longitudinal data: time series, repeated measures, or something else?
Since originally posing this question, I have come to the conclusion that mixed-effect models with subjects as the random blocking factor are the practical solution to this problem, i.e. option #2 in my original post.

If the random argument to lme is set to ~1|ID (where ID identifies observations coming from the same test subject) then a random intercept model is fitted. If it is set to ~TIME|ID then a random slope and intercept model is fitted. Any right-sided formula containing variables that vary within the same individual can be placed between the ~ and the |ID, but overly complicated formulas will result in a saturated model and/or various numerical errors. Therefore, one can use a likelihood ratio test (anova(myModel, update(myModel, random = ~TIME|ID))) to compare a random intercept model to a random slope and intercept model, or to other candidate random-effect models. If the difference in fit is not significant, stick with the simpler model. It was overkill for me to go into random trig functions in my original post.

The other issue I raised was one of model selection. It seems like people don't like model selection of any kind, but nobody has any practical alternatives. If you blindly believe the researcher who collected the data about which explanatory variables are and are not relevant, you will often be blindly accepting their untested assumptions. If you take into account every possible bit of information, you will often end up with a saturated model. If you arbitrarily choose a particular model and variables because they're easy, you will again be accepting untested assumptions, this time your own.

So, in summary, for repeated measures it's lme models followed by trimming via MASS::stepAIC or MuMIn::dredge and/or likelihood-ratio comparisons with anova, until and unless someone has a better idea. I'll leave this self-answer up for a while before accepting it, to see if anybody has any rebuttals. Thanks for your time, and if you're reading this because you have the same sort of question I have, good luck and welcome to semi-uncharted territory.
26,574
Why doesn't recall take into account true negatives?
Recall (in combination with precision) is generally used in areas where one is primarily interested in finding the positives. An example of such an area is Performance Marketing or (as already suggested by chl's link) the area of Information Retrieval.

So: if you are primarily interested in finding the negatives, the "True Negative Rate" (as already suggested by chl) is the way to go. But don't forget to look at a "precision for focus on negatives" metric (i.e. $\frac{TN}{TN + FN}$), because otherwise the "True Negative Rate" can be optimized by setting the prediction to "Negative" for all data points.

If you are interested in optimizing recall for both negatives AND positives, you should look at "Accuracy" (see again chl's link). But beware of class skew (i.e. when you have many more positives than negatives, or vice versa); in that case one can "optimize" accuracy by setting the prediction to the major class for all data points.
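For concreteness, here is a small helper computing the quantities mentioned above from raw confusion-matrix counts (the function and key names are mine; the $TN/(TN+FN)$ quantity is usually called negative predictive value):

```python
def rates(tp, fp, tn, fn):
    """Summary metrics from confusion-matrix counts."""
    return {
        "recall": tp / (tp + fn),                    # true positive rate
        "tnr": tn / (tn + fp),                       # true negative rate
        "npv": tn / (tn + fn),                       # TN/(TN+FN), "precision for negatives"
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

Note that predicting "Negative" everywhere drives tnr to 1 regardless of the positives, which is exactly the degenerate optimization warned about above.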
26,575
How to present the gain in explained variance thanks to the correlation of Y and X?
Here are some suggestions (about your plot, not about how I would illustrate correlation/regression analysis):

The two univariate plots you show in the right and left margins may be simplified with a call to rug(); I find it more informative to show a density plot of $X$ and $Y$, or a boxplot, at the risk of being evocative of the idea of a bi-normality assumption, which makes no sense in this context.

In addition to the regression line, it is worth showing a non-parametric estimate of the trend, like a loess (this is good practice and highly informative about possible local non-linearities).

Points might be highlighted (with varying color or size) according to leverage or Cook's distance, i.e. any of those measures that show how influential individual values are on the estimated regression line. I'll second @DWin's comment, and I think it is better to highlight how individual points "degrade" goodness-of-fit or induce some kind of departure from the linearity assumption.

Of note, this graph assumes X and Y are non-paired data; otherwise I would stick to a Bland-Altman plot ($(X-Y)$ against $(X+Y)/2$), in addition to the scatterplot.
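For the third suggestion, leverage and Cook's distance are cheap to compute by hand. A minimal numpy sketch for OLS (not tied to any particular plotting code; the function name is mine) might be:

```python
import numpy as np

def leverage_and_cooks(X, y):
    """Hat-matrix diagonal (leverages) and Cook's distances for OLS.

    X is the design matrix, including an intercept column.
    """
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
    h = np.diag(H)
    resid = y - H @ y
    s2 = resid @ resid / (n - p)              # residual variance estimate
    cooks = resid**2 / (p * s2) * h / (1.0 - h) ** 2
    return h, cooks
```

A handy sanity check: the leverages always sum to the number of fitted parameters.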
26,576
How to present the gain in explained variance thanks to the correlation of Y and X?
Not answering your exact question, but the following could be interesting for visualizing one possible pitfall of linear correlations, based on an answer from Stack Overflow:

par(mfrow = c(2, 1))
set.seed(1)
x <- rnorm(1000)
y <- rnorm(1000)
plot(y ~ x, ylab = "",
     main = paste('1000 random values (r=', round(cor(x, y), 4), ')', sep = ''))
abline(lm(y ~ x), col = 2, lwd = 2)

## add a single extreme point
x <- c(x, 500)
y <- c(y, 500)
cor(x, y)
plot(y ~ x, ylab = "",
     main = paste('1000 random values and (500, 500) (r=', round(cor(x, y), 4), ')', sep = ''))
abline(lm(y ~ x), col = 2, lwd = 2)

@Gavin Simpson's and @bill_080's answers also include nice plots of correlation in the same topic.
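Roughly the same demonstration in Python, for readers outside R (the exact correlation values depend on the seed, but the pattern does not):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
y = rng.standard_normal(1000)
r_clean = np.corrcoef(x, y)[0, 1]             # near 0: no real association

# a single extreme point (500, 500) dominates the correlation
x_out = np.append(x, 500.0)
y_out = np.append(y, 500.0)
r_outlier = np.corrcoef(x_out, y_out)[0, 1]   # near 1
```

One influential point is enough to push the correlation from essentially zero to essentially one, which is the pitfall being visualized.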
26,577
How to present the gain in explained variance thanks to the correlation of Y and X?
I'd have two two-panel plots, both with the xy plot on the left and a histogram on the right. In the first plot, a horizontal line is placed at the mean of y, and lines extend from it to each point, representing the residuals of the y values from the mean; the accompanying histogram simply plots these residuals. In the second pair, the xy plot contains a line representing the linear fit, again with vertical lines representing the residuals, which are shown in the histogram to the right. Keep the x axis of the histograms constant to highlight the shift to lower values in the linear fit relative to the mean "fit".
26,578
How to present the gain in explained variance thanks to the correlation of Y and X?
I think what you propose is good, but I would do it with three different examples:

1) X and Y are completely unrelated. Simply remove "x" from the R code that generates y (y1 <- rnorm(50)).
2) The example you posted (y2 <- x + rnorm(50)).
3) X and Y are the same variable. Simply remove "rnorm(50)" from the R code that generates y (y3 <- x).

This would more explicitly show how increasing the correlation decreases the variability in the residuals. You would just need to make sure that the vertical axis doesn't change with each plot, which may happen if you're using default scaling. So you could compare three plots: r1 vs x, r2 vs x and r3 vs x, where "r" indicates the residuals from the fit using y1, y2, and y3 respectively. My R skills in plotting are quite hopeless, so I can't offer much help here.
26,579
Why Use the Cornish-Fisher Expansion Instead of Sample Quantile?
I have never seen C-F used for empirical estimates. Why bother? You have outlined a good set of reasons why not. (I don't think C-F "wins" even in case 1 due to the instability of estimates of higher-order cumulants and their lack of resistance.) It is intended for theoretical approximations. Johnson & Kotz, in their encyclopedic work on distributions, routinely use C-F expansions to develop approximations to distribution functions. Such approximations were useful to supplement tables (or even create them) before powerful statistical software was widespread. They can still be useful on platforms where appropriate code is not available such as quick-and-dirty spreadsheet calculations.
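As an illustration of the theoretical use, here is a sketch of one common fourth-order truncation of the Cornish-Fisher expansion for adjusting a standard-normal quantile $z$ given skewness and excess kurtosis (this particular truncation appears in, e.g., modified-VaR formulas; treat it as an example rather than the only form):

```python
def cornish_fisher_quantile(z, skew, exkurt):
    """Fourth-order Cornish-Fisher adjustment of a standard-normal
    quantile z, given skewness and excess kurtosis of the target
    distribution (one common truncation of the expansion)."""
    return (z
            + (z**2 - 1) * skew / 6
            + (z**3 - 3 * z) * exkurt / 24
            - (2 * z**3 - 5 * z) * skew**2 / 36)
```

With zero skewness and zero excess kurtosis the adjustment vanishes and the normal quantile is returned unchanged, which is the sanity check for any such approximation.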
26,580
What is a test of independence?
I would start by defining what you mean by independence. For example, If two variables are independent this means that knowing the value of one variable does not tell you anything about the value of the other variable. Then I would describe the test: To test for independence we construct a table of values that we would expect to see if the variables were independent. If we observed something "very" different from these expected values, we would conclude that the variables are unlikely to be independent.
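The "table of expected values" idea can be sketched in a few lines of Python (a plain chi-square statistic; the function name is mine, and in practice you would use a library routine that also returns a p-value):

```python
def chi_square_stat(table):
    """Chi-square statistic for a contingency table given as a list of
    rows: observed counts are compared with the counts expected if the
    row and column variables were independent."""
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_tot[i] * col_tot[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat
```

A table whose cells match the independence expectation exactly gives a statistic of zero; the more the observed counts deviate from it, the larger the statistic.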
26,581
What is a test of independence?
Why don't you take the definition from Wikipedia? It's quite short and doesn't use heavy statistical terms: A test of independence assesses whether paired observations on two variables, expressed in a contingency table, are independent of each other – for example, whether people from different regions differ in the frequency with which they report that they support a political candidate.
26,582
Example of a process that is 2nd order stationary but not strictly stationary
Take any process $(X_t)_t$ with independent components that has a constant first and second moment, and let the third moment vary with $t$. It is second-order stationary because the mean is constant and the autocovariance $\operatorname{Cov}(X_t, X_{t+h})$ depends only on $h$ (it equals the constant variance for $h=0$ and is zero otherwise, by independence). It is not strictly stationary because $P( X_t \geq x_t, X_{t+1} \geq x_{t+1})$ depends upon $t$ through the varying third moment.
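A concrete instance, in plain Python for checkability: two discrete distributions that share mean 0 and variance 1 but differ in the third moment. A process whose independent components alternate between them over $t$ is second-order stationary but not strictly stationary (the distribution names and helper are mine):

```python
def moment(dist, k):
    """k-th raw moment of a discrete distribution {value: probability}."""
    return sum(p * v**k for v, p in dist.items())

# even t: symmetric two-point variable; odd t: skewed two-point variable
even_t = {1.0: 0.5, -1.0: 0.5}
odd_t = {2.0: 0.2, -0.5: 0.8}
```

Both have mean 0 and variance 1, so the mean and autocovariance of the alternating process do not depend on $t$; the third moments (0 versus 1.5) differ, so the marginal law of $X_t$, and hence the finite-dimensional distributions, do depend on $t$.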
26,583
RFE vs Backward Elimination - is there a difference?
Quoting Guyon in the paper that introduced RFE: This [RFE] iterative procedure is an instance of backward feature elimination (Kohavi, 2000 and references therein) Indeed, when introducing RFE, Guyon does so using Support Vector Machines, and proposes two different methods to rank the single predictors. At the same time Kohavi tests backward elimination both on tree classifiers and naive bayes - therefore the scoring methods for the features were different. All in all, the two methods are the same thing - starting from a model with all predictors and removing them one by one based on some scoring function (Z-score for linear regression, Gini for tree based methods, etc.), with the goal of maximizing some target metric (AIC, or test performance).
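A minimal version of Guyon's setup in scikit-learn (synthetic data of my choosing, linear SVM as the scoring model): at each step RFE refits and drops the feature with the smallest $|w_i|$ until the requested number remains.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic data: 3 informative features (columns 0-2, since shuffle=False)
# hidden among 10.
X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)

# RFE wrapped around a linear SVM, as in Guyon's paper: refit, drop the
# feature with the smallest |coefficient|, repeat.
selector = RFE(SVC(kernel="linear"), n_features_to_select=3, step=1).fit(X, y)
print("selected columns:", np.flatnonzero(selector.support_))
print("elimination ranking:", selector.ranking_)  # 1 = kept; larger = dropped earlier
```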
26,584
RFE vs Backward Elimination - is there a difference?
RFE is a bit of a hybrid. It looks and acts as a wrapper, similar to backward selection. But its main drawback is its selection of variables is essentially univariate. It uses a univariate measure of goodness to rank order the variables. In this sense it behaves like a filter. I like this description of feature selection methods: Filters: fast univariate measures that relate a single independent variable to the dependent variable. A filter can be linear (e.g., Pearson correlation) or nonlinear (e.g., a univariate tree model). Filters typically scale linearly with the number of variables. We want them to be very fast, so we use a filter first to get rid of many candidate variables, then use a wrapper. Wrappers: called such because there is a model "wrapped" around the feature selection method. A good wrapper is multivariate and will rank order variables in order of multivariate power, thus removing correlations. Wrappers build many, many models and scale nonlinearly with the number of variables. It's best to use a fast, simple nonlinear model for a wrapper (e.g., a single decision tree with depth about 5, a random forest with about 5 simple trees, LGBM with about 20 simple trees). It really doesn't matter what model you use for the wrapper (DT, RF, LGBM, CatBoost, SVM...), as long as it's a very simple nonlinear model. After you do the proper wrapper the result is a sorted list of variables in order of multivariate importance. You don't get this with RFE. In practice you might create thousands of candidate variables. You then do feature selection to get a short list that takes correlations into account. You first do a filter, a univariate measure, to get down to a short list of maybe 50 to 100 candidate variables. Then you run a (proper) wrapper to get your list sorted by multivariate importance, and you typically find maybe 10 to 20 or so variables are sufficient for a good model. Then you use this small number for your model exploration, tuning and selection.
Sure, many nonlinear models by themselves will give you a sorted list of variables by importance, but it's impractical to run a full complex nonlinear model with hundreds or thousands of candidate variables. That's why we do feature selection as a step to reduce the variables before we explore the final nonlinear models. RFE/RFECV is a poor stepchild in between these. Its variable ranking is essentially univariate, like a filter. It doesn't remove correlations, also like a filter. But it has a model wrapper around it, so it looks like a wrapper. It decides how many variables are ranked #1, sorts the rest by univariate importance, and doesn't sort the #1 variables (or any variables) by multivariate importance. My opinion: RFE is a popular but lousy wrapper. Use SequentialFeatureSelector.
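For comparison, here is a sketch of the suggested alternative — scikit-learn's SequentialFeatureSelector in backward mode, wrapped around a deliberately simple tree (synthetic data of my choosing). Unlike RFE's coefficient ranking, each elimination is scored by the cross-validated performance of the remaining feature set:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, n_informative=3,
                           n_redundant=0, random_state=0)

# Backward selection: at each step, drop the feature whose removal hurts
# cross-validated accuracy of the simple tree the least.
sfs = SequentialFeatureSelector(
    DecisionTreeClassifier(max_depth=5, random_state=0),
    n_features_to_select=3, direction="backward", cv=3,
).fit(X, y)
print("selected columns:", np.flatnonzero(sfs.get_support()))
```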
26,585
Real World Consequences of Misinterpreting Confidence Intervals?
Where the confidence and credible intervals have similar bounds it is hard to see that interpreting one as if it were the other is an important problem. I have not come across a real-world circumstance where the distinction between a fixed unknown parameter value and a parameter value that is a random variable is problematical of itself. However, it is important to realise that credible intervals can be shaped by an informative prior and such an interval will be different from the frequentist confidence interval derived for the same data. In that circumstance it would be more problematical to assume that the confidence interval reflected the (posterior) probability of the parameter values of interest.
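The last point can be made concrete with the conjugate normal model (all numbers illustrative): with known noise sd and an informative N(0, 2²) prior on the mean, the 95% credible interval is both shrunk toward the prior mean and narrower than the 95% confidence interval, so reading one as the other would mislead.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Five noisy measurements of an unknown mean theta; noise sd assumed known.
sigma, n = 10.0, 5
y = rng.normal(3.0, sigma, n)
ybar, se = y.mean(), sigma / np.sqrt(n)

# Frequentist 95% CI: ybar +/- 1.96 * se
ci = (ybar - 1.96 * se, ybar + 1.96 * se)

# Bayesian 95% credible interval under an informative N(0, 2^2) prior:
# posterior precision = prior precision + data precision (conjugate update).
tau0 = 2.0
post_prec = 1 / tau0**2 + n / sigma**2
post_mean = (n / sigma**2) * ybar / post_prec   # prior mean is 0
post_sd = np.sqrt(1 / post_prec)
cred = stats.norm.interval(0.95, loc=post_mean, scale=post_sd)

print(f"95% confidence interval: ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"95% credible interval:   ({cred[0]:.2f}, {cred[1]:.2f})")
```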
26,586
Real World Consequences of Misinterpreting Confidence Intervals?
Firstly, there are artificial examples. E.g. a 95% CI that contains the whole parameter space 95% of the time and is just the interval from 11.2323456 to 11.2323457 the remaining 5% of the time is obviously a valid 95% CI (i.e. contains the true value at least 95% of the time), but also completely useless. Nobody uses that kind of thing in practice, so let's ignore these types of examples. Secondly, there's the situation where you use something like sample mean $\pm$ 1.96 $\times$ standard error. I.e. you are effectively using flat priors. If you truly knew absolutely nothing about the problem at hand, I guess that might be a reasonable prior belief and okay. In practice, we are hardly ever in that situation. We often have some idea of the variability of the measurement, as well as location (e.g. we have some idea by how much someone's blood pressure can change without treatment, we have some idea how much of an effect a new blood pressure drug could possibly have etc.). If one ignores that prior information, then extreme estimates and confidence intervals created using flat priors are much more likely to not contain the true value than the ones that are more consistent with the prior information. In practice, I believe the issue is usually from the second case. This is a real issue e.g. for a drug company that has a series of drugs coming out of pre-clinical research and there's not that much to distinguish them (all are considered sort of promising and showed some promising results in in-vitro and animal experiments). Now, these drugs are each being tested in small proof-of-concept studies (especially so, if these are powered for e.g. 80% power at the 10% one-sided significance level for very optimistic assumed effect sizes).
If about 30 to 50% of the new drugs being tried really have a meaningfully large effect (with some obvious bounds on that), then confidence intervals containing very large effect sizes are more likely to lie above the true effect than those that are less "optimistic". If you then need to make decisions (proceed or not) and to determine the size of the next studies (and likelihood of success), using the confidence intervals as if they were 95% credible intervals is likely to be much too optimistic. That's why in practice people will take much more Bayesian approaches (see e.g. what one company does and another).
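A quick simulation of this scenario (all numbers hypothetical): 40% of candidate drugs truly work, each is tested in a small trial, and we look only at the "promising" ones (one-sided p < 0.10). Unconditionally the estimates are unbiased, but conditional on being promising they overshoot the truth badly — exactly the optimism that treating the CI as a credible interval ignores.

```python
import numpy as np

rng = np.random.default_rng(42)
n_drugs = 100_000

# Hypothetical: 40% of drugs have a true effect of 0.3 standard errors,
# the rest have no effect; each trial's estimate has SE = 1.
true_effect = np.where(rng.random(n_drugs) < 0.4, 0.3, 0.0)
estimate = rng.normal(true_effect, 1.0)

# "Promising" trials: one-sided p < 0.10, i.e. z > 1.2816
promising = estimate > 1.2816
bias_all = (estimate - true_effect).mean()
bias_promising = (estimate - true_effect)[promising].mean()
print(f"mean overestimation, all trials:       {bias_all:+.3f}")
print(f"mean overestimation, promising trials: {bias_promising:+.3f}")
```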
26,587
Real World Consequences of Misinterpreting Confidence Intervals?
Much research is done in many fields. In many journals and fields statistical significance is seen as a requirement for a "discovery" worth publishing. This means that there is publication bias; significant studies (i.e. studies where the confidence interval does not include "no effect", often formalised as parameter zero) are published and insignificant ones disappear, and people look at many things until they find something significant. The consequence of this is that published significant effects are actually biased high; in other words, confidence intervals are too far on one side, away from "no effect". Using credible intervals should force the researcher to think about prior information and plausibility. (In fact this often doesn't happen, supposedly "informationless" priors are used, and the situation may not be better than with confidence intervals.) Andrew Gelman argues on his blog that in many social science studies priors should be chosen so that small and zero effects are more and large effects are less likely, so that credible intervals have a better chance to include zero and small effects, mitigating the publication bias problem. There are various references on the blog to studies that find significant and in fact implausibly large effects using frequentist inference, e.g., that more attractive parents tend to have daughters rather than sons, or that voting behaviour of women depends on how long after ovulation the election takes place. It's somewhat hard to find a specific one using the blog search system but there are many of them. One related posting is this. I admit though that this issue is only somewhat loosely related to the question, as the problem with confidence intervals here is not in the first place that they are wrongly interpreted as credible intervals (and in fact credible intervals may mess things up as well with an unsuitable prior). There is some relation though. 
The bare fact that the confidence interval has large effect sizes in it shouldn't lead us to believe that these are plausible or true with high probability. They are mathematically correct, but the confidence level is a performance characteristic rather than a measure of plausibility/probability of parameters, and the performance characteristic has limited value or even requires adjustment in case many confidence intervals are in fact run, or they are run conditionally on other diagnoses performed on the same data. Bayesian analyses grant epistemic interpretation, i.e., probabilities assigned to parameters regard our knowledge/expectations of the characteristics of the underlying process rather than the performance of the method. This will however not necessarily solve the problem. In particular, publication and selection bias can still bite if results based on priors are favoured that lead to headline-grabbing claims or if priors are chosen dependent on the data. Furthermore all Bayesian results are of course conditional on prior and model, and can only do better if these are chosen taking information appropriately into account. Often "informationless priors" are used that simply reproduce a problematic frequentist analysis. I should also mention that frequentists argue that we actually should be interested in performance characteristics, and that it is a bug rather than a feature of Bayesian analyses that they don't bother with this.
26,588
Real World Consequences of Misinterpreting Confidence Intervals?
A 95% credible interval has a 95% posterior probability of containing the parameter. It seems very unlikely that anyone would interpret the 95% confidence level as a 95% posterior probability, and hence it is unlikely that anyone would interpret a CI as if it were a credible interval, contrary to the implications in your question. Rather, the common misinterpretation of CIs by non-statisticians revolves around not understanding the distinction between pre-sample and post-sample, and hence confusing pre-sample 95% probability with post-sample 95% confidence. The latter probability is not a posterior probability.
26,589
Uncertainty estimation in high-dimensional inference problems without sampling?
First of all, I think your statistical model is wrong. I change your notation to one more familiar to statisticians, thus let $$\mathbf{d}=\mathbf{y}=(y_1,\dots,y_N),\ N=10^6$$ be your vector of observations (data), and $$\begin{align} \mathbf{x}&=\boldsymbol{\theta}=(\theta_1,\dots,\theta_p) \\ \mathbf{y}&=\boldsymbol{\phi}=(\phi_1,\dots,\phi_p) \\ \mathbf{z}&=\boldsymbol{\rho}=(\rho_1,\dots,\rho_p), \ p \approx 650 \\ \end{align}$$ your vectors of parameters, of total dimension $d=3p \approx 2000$. Then, if I understood correctly, you assume a model $$ \mathbf{y} = \mathbf{G}\mathbf{r_1}(\boldsymbol{\theta}, \boldsymbol{\phi})+\boldsymbol{\rho}\mathbf{G}\mathbf{r_2}(\boldsymbol{\theta}, \boldsymbol{\phi})+\boldsymbol{\epsilon},\ \boldsymbol{\epsilon}\sim\mathcal{N}(0,I_N) $$ where $\mathbf{G}$ is the $N\times d$ spline interpolation matrix. This is clearly wrong. There's no way the errors at different points in the image from the same camera, and at the same point in images from different cameras, are independent. You should look into spatial statistics and models such as generalized least squares, semivariogram estimation, kriging, Gaussian Processes, etc. Having said that, since your question is not whether the model is a good approximation of the actual data generating process, but how to estimate such a model, I'll show you a few options to do that. HMC 2000 parameters is not a very large model, unless you're training this thing on a laptop. The dataset is bigger ($10^6$ data points), but still, if you have access to cloud instances or machines with GPUs, frameworks such as Pyro or Tensorflow Probability will make short work of such a problem. Thus, you could simply use GPU-powered Hamiltonian Monte Carlo. Pros: "exact" inference, in the limit of an infinite number of samples from the chain. Cons: no tight bound on the estimation error, multiple convergence diagnostic metrics exist, but none is ideal.
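In practice you would write the model in Pyro or TensorFlow Probability and let the framework handle gradients and step-size tuning, but the mechanics of HMC fit in a few lines. A hand-rolled sketch on a toy 2-D standard-normal "posterior" (my own stand-in target, not the camera model above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy target: log-density of a standard 2-D Gaussian and its gradient.
def log_p(q):
    return -0.5 * q @ q

def grad_log_p(q):
    return -q

def hmc_step(q, eps=0.1, n_leapfrog=20):
    p = rng.standard_normal(q.size)              # fresh momentum
    q_new, p_new = q.copy(), p.copy()
    # Leapfrog integration of the Hamiltonian dynamics
    p_new = p_new + 0.5 * eps * grad_log_p(q_new)
    for i in range(n_leapfrog):
        q_new = q_new + eps * p_new
        if i < n_leapfrog - 1:
            p_new = p_new + eps * grad_log_p(q_new)
    p_new = p_new + 0.5 * eps * grad_log_p(q_new)
    # Metropolis correction on the joint (position, momentum) energy
    log_alpha = (log_p(q_new) - 0.5 * p_new @ p_new) - (log_p(q) - 0.5 * p @ p)
    return q_new if np.log(rng.random()) < log_alpha else q

q = np.zeros(2)
samples = []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
samples = np.array(samples)
print("posterior mean ~", samples.mean(axis=0))
print("posterior var  ~", samples.var(axis=0))
```

The sample mean and variance should come out near 0 and 1 per coordinate, matching the target.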
Large sample approximation With an abuse of notation, let's denote by $\theta$ the vector obtained by concatenating your three vectors of parameters. Then, using the Bayesian central limit theorem (Bernstein-von Mises), you could approximate $p(\theta\vert \mathbf{y})$ with $\mathcal{N}(\hat{\theta}_n,I_n^{-1}(\theta_0))$, where $\theta_0$ is the "true" parameter value, $\hat{\theta}_n$ is the MLE estimate of $\theta_0$ and $I_n^{-1}(\theta_0)$ is the Fisher information matrix evaluated at $\theta_0$. Of course, $\theta_0$ being unknown, we'll use $I_n^{-1}(\hat{\theta}_n)$ instead. The validity of the Bernstein-von Mises theorem depends on a few hypotheses which you can find, e.g., here: in your case, assuming that $R_1,R_2$ are smooth and differentiable, the theorem is valid, because the support of a Gaussian prior is the whole parameter space. Or, better, it would be valid, if your data were actually i.i.d. as you assume, but I don't believe they are, as I explained in the beginning. Pros: especially useful in the $p \ll N$ case. Guaranteed to converge to the right answer, in the iid setting, when the likelihood is smooth and differentiable and the prior is nonzero in a neighborhood of $\theta_0$. Cons: The biggest con, as you noted, is the need to invert the Fisher information matrix. Also, I wouldn't know how to judge the accuracy of the approximation empirically, short of using a MCMC sampler to draw samples from $p(\theta\vert \mathbf{y})$. Of course, this would defeat the utility of using B-vM in the first place. Variational inference In this case, rather than finding the exact $p(\theta\vert \mathbf{y})$ (which would require the computation of a $d-$dimensional integral), we choose to approximate $p$ with $q_{\phi}(\theta)$, where $q$ belongs to the parametric family $\mathcal{Q}_{\phi}$ indexed by the parameter vector $\phi$. We look for $\phi^*$ s.t. some measure of discrepancy between $q$ and $p$ is minimized.
Choosing this measure to be the KL divergence, we obtain the Variational Inference method: $$\DeclareMathOperator*{\argmin}{arg\,min} \phi^*=\argmin_{\phi\in\Phi}D_{KL}(q_{\phi}(\theta)||p(\theta\vert\mathbf{y}))$$ Requirements on $q_{\phi}(\theta)$: it should be differentiable with respect to $\phi$, so that we can apply methods for large scale optimization, such as Stochastic Gradient Descent, to solve the minimization problem. it should be flexible enough that it can approximate accurately $p(\theta\vert\mathbf{y})$ for some value of $\phi$, but also simple enough that it's easy to sample from. This is because estimating the KL divergence (our optimization objective) requires estimating an expectation w.r.t $q$. You might choose $q_{\phi}(\theta)$ to be fully factorized, i.e., the product of $d$ univariate probability distributions: $$ q_{\phi}(\theta)=\prod_{i=1}^d q_{\phi_i}(\theta_i)$$ this is the so-called mean-field Variational Bayes method. One can prove (see, e.g., Chapter 10 of this book) that the optimal solution for each of the factors $q_{\phi_j}(\theta_j)$ is $$ \log{q_j^*(\theta_j)} = \mathbb{E}_{i\neq j}[\log{p(\mathbf{y},\theta)}] + \text{const.}$$ where $p(\mathbf{y},\theta)$ is the joint distribution of parameters and data (in your case, it's the product of your Gaussian likelihood and the Gaussian priors over the parameters) and the expectation is with respect to the other variational univariate distributions $q_1^*(\theta_1),\dots,q_{j-1}^*(\theta_{j-1}),q_{j+1}^*(\theta_{j+1}),\dots,q_{d}^*(\theta_{d})$. Of course, since the solution for one of the factors depends on all the other factors, we must apply an iterative procedure, initializing all the distributions $q_{i}(\theta_{i})$ to some initial guess and then iteratively updating them one at a time with the equation above. 
Note that instead of computing the expectation above as a $(d-1)-$dimensional integral, which would be prohibitive in your case where the priors and the likelihood aren't conjugate, you could use Monte Carlo estimation to approximate the expectation. The mean-field Variational Bayes algorithm is not the only possible VI algorithm you could use: the Variational Autoencoder presented in Kingma & Welling, 2014, "Auto-encoding Variational Bayes" is an interesting alternative, where, rather than assuming a fully factorized form for $q$, and then deriving a closed-form expression for the $q_i$, $q$ is assumed to be multivariate Gaussian, but with possibly different parameters at each of the $N$ data points. To amortize the cost of inference, a neural network is used to map the input space to the variational parameters space. See the paper for a detailed description of the algorithm. VAE implementations are again available in all the major Deep Learning frameworks.
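The mean-field coordinate-ascent recipe can be tried on the textbook conjugate example (Gaussian data with unknown mean $\mu$ and precision $\tau$, factorized $q(\mu)q(\tau)$), where both updates have closed form — a toy sketch with made-up data, not the imaging model from the question:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(5.0, 2.0, 500)          # data: true mean 5, true sd 2 (tau = 0.25)
n, ybar = len(y), y.mean()

# Mean-field VI for a Gaussian with unknown mean and precision:
#   q(mu) = N(mu_N, 1/lam_N),  q(tau) = Gamma(a_N, b_N)
mu0, lam0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3   # vague priors
E_tau = 1.0                                  # initial guess for E_q[tau]
for _ in range(50):                          # coordinate ascent
    mu_N = (lam0 * mu0 + n * ybar) / (lam0 + n)
    lam_N = (lam0 + n) * E_tau
    a_N = a0 + (n + 1) / 2
    # E_q[(y_i - mu)^2] = (y_i - mu_N)^2 + 1/lam_N, and similarly for the prior term
    b_N = b0 + 0.5 * (lam0 * ((mu_N - mu0) ** 2 + 1 / lam_N)
                      + np.sum((y - mu_N) ** 2) + n / lam_N)
    E_tau = a_N / b_N

print(f"E[mu] = {mu_N:.3f}, E[tau] = {E_tau:.3f} (true tau = 0.25)")
```

Each update is the exponentiated expected log joint from the formula above; iterating the two until the variational parameters stop changing recovers posterior means close to the truth.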
Uncertainty estimation in high-dimensional inference problems without sampling?
First of all, I think your statistical model is wrong. I change your notation to one more familiar to statisticians, thus let $$\mathbf{d}=\mathbf{y}=(y_1,\dots,y_N),\ N=10^6$$ be your vector of obser
Uncertainty estimation in high-dimensional inference problems without sampling?

First of all, I think your statistical model is wrong. I'll change your notation to one more familiar to statisticians, thus let $$\mathbf{d}=\mathbf{y}=(y_1,\dots,y_N),\ N=10^6$$ be your vector of observations (data), and $$\begin{align} \mathbf{x}&=\boldsymbol{\theta}=(\theta_1,\dots,\theta_p) \\ \mathbf{y}&=\boldsymbol{\phi}=(\phi_1,\dots,\phi_p) \\ \mathbf{z}&=\boldsymbol{\rho}=(\rho_1,\dots,\rho_p), \ p \approx 650 \\ \end{align}$$ your vectors of parameters, of total dimension $d=3p \approx 2000$. Then, if I understood correctly, you assume a model $$ \mathbf{y} = \mathbf{G}\mathbf{r_1}(\boldsymbol{\theta}, \boldsymbol{\phi})+\boldsymbol{\rho}\,\mathbf{G}\mathbf{r_2}(\boldsymbol{\theta}, \boldsymbol{\phi})+\boldsymbol{\epsilon},\ \boldsymbol{\epsilon}\sim\mathcal{N}(0,I_N) $$ where $\mathbf{G}$ is the $N\times d$ spline interpolation matrix.

This is clearly wrong. There's no way the errors at different points in the image from the same camera, and at the same point in images from different cameras, are independent. You should look into spatial statistics and models such as generalized least squares, semivariogram estimation, kriging, Gaussian processes, etc. Having said that, since your question is not whether the model is a good approximation of the actual data-generating process, but how to estimate such a model, I'll show you a few options to do that.

HMC

2000 parameters is not a very large model, unless you're training this thing on a laptop. The dataset is bigger ($10^6$ data points), but still, if you have access to cloud instances or machines with GPUs, frameworks such as Pyro or TensorFlow Probability will make short work of such a problem. Thus, you could simply use GPU-powered Hamiltonian Monte Carlo.

Pros: "exact" inference, in the limit of an infinite number of samples from the chain.

Cons: no tight bound on the estimation error; multiple convergence diagnostic metrics exist, but none is ideal.

Large sample approximation

With an abuse of notation, let's denote by $\theta$ the vector obtained by concatenating your three vectors of parameters. Then, using the Bayesian central limit theorem (Bernstein-von Mises), you could approximate $p(\theta\vert \mathbf{y})$ with $\mathcal{N}(\hat{\theta}_n,I_n^{-1}(\theta_0))$, where $\theta_0$ is the "true" parameter value, $\hat{\theta}_n$ is the MLE estimate of $\theta_0$, and $I_n(\theta_0)$ is the Fisher information matrix evaluated at $\theta_0$. Of course, $\theta_0$ being unknown, we'll use $I_n^{-1}(\hat{\theta}_n)$ instead. The validity of the Bernstein-von Mises theorem depends on a few hypotheses which you can find, e.g., here: in your case, assuming that $R_1,R_2$ are smooth and differentiable, the theorem is valid, because the support of a Gaussian prior is the whole parameter space. Or, better, it would be valid if your data were actually i.i.d. as you assume, but I don't believe they are, as I explained in the beginning.

Pros: especially useful in the $p \ll N$ case. Guaranteed to converge to the right answer, in the i.i.d. setting, when the likelihood is smooth and differentiable and the prior is nonzero in a neighborhood of $\theta_0$.

Cons: the biggest con, as you noted, is the need to invert the Fisher information matrix. Also, I wouldn't know how to judge the accuracy of the approximation empirically, short of using an MCMC sampler to draw samples from $p(\theta\vert \mathbf{y})$. Of course, this would defeat the utility of using B-vM in the first place.

Variational inference

In this case, rather than finding the exact $p(\theta\vert \mathbf{y})$ (which would require the computation of a $d$-dimensional integral), we choose to approximate $p$ with $q_{\phi}(\theta)$, where $q$ belongs to the parametric family $\mathcal{Q}_{\phi}$ indexed by the parameter vector $\phi$. We look for $\phi^*$ s.t. some measure of discrepancy between $q$ and $p$ is minimized. Choosing this measure to be the KL divergence, we obtain the Variational Inference method: $$\DeclareMathOperator*{\argmin}{arg\,min} \phi^*=\argmin_{\phi\in\Phi}D_{KL}(q_{\phi}(\theta)||p(\theta\vert\mathbf{y}))$$

Requirements on $q_{\phi}(\theta)$: it should be differentiable with respect to $\phi$, so that we can apply methods for large-scale optimization, such as Stochastic Gradient Descent, to solve the minimization problem. It should be flexible enough that it can accurately approximate $p(\theta\vert\mathbf{y})$ for some value of $\phi$, but also simple enough that it's easy to sample from. This is because estimating the KL divergence (our optimization objective) requires estimating an expectation w.r.t. $q$.

You might choose $q_{\phi}(\theta)$ to be fully factorized, i.e., the product of $d$ univariate probability distributions: $$ q_{\phi}(\theta)=\prod_{i=1}^d q_{\phi_i}(\theta_i)$$ This is the so-called mean-field Variational Bayes method. One can prove (see, e.g., Chapter 10 of this book) that the optimal solution for each of the factors $q_{\phi_j}(\theta_j)$ is $$ \log{q_j^*(\theta_j)} = \mathbb{E}_{i\neq j}[\log{p(\mathbf{y},\theta)}] + \text{const.}$$ where $p(\mathbf{y},\theta)$ is the joint distribution of parameters and data (in your case, it's the product of your Gaussian likelihood and the Gaussian priors over the parameters) and the expectation is with respect to the other variational univariate distributions $q_1^*(\theta_1),\dots,q_{j-1}^*(\theta_{j-1}),q_{j+1}^*(\theta_{j+1}),\dots,q_{d}^*(\theta_{d})$. Of course, since the solution for one factor depends on all the other factors, we must apply an iterative procedure, initializing all the distributions $q_{i}(\theta_{i})$ to some initial guess and then iteratively updating them one at a time with the equation above.

Note that instead of computing the expectation above as a $(d-1)$-dimensional integral, which would be prohibitive in your case where the priors and the likelihood aren't conjugate, you could use Monte Carlo estimation to approximate the expectation. The mean-field Variational Bayes algorithm is not the only possible VI algorithm you could use: the Variational Autoencoder presented in Kingma & Welling, 2014, "Auto-encoding Variational Bayes" is an interesting alternative where, rather than assuming a fully factorized form for $q$ and then deriving a closed-form expression for the $q_i$, $q$ is assumed to be multivariate Gaussian, but with possibly different parameters at each of the $N$ data points. To amortize the cost of inference, a neural network is used to map the input space to the variational-parameter space. See the paper for a detailed description of the algorithm: VAE implementations are again available in all the major Deep Learning frameworks.
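To make the variational machinery concrete, here is a minimal sketch of my own (not from the question): fitting a Gaussian $q_{m,s}$ to a hypothetical 1-D target density by stochastic gradient ascent on a Monte Carlo estimate of the ELBO, using the reparameterization trick popularized by the VAE paper. The target density and all numbers are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Score of a hypothetical 1-D target "posterior" N(2, 0.5^2),
# i.e. d/dtheta of log p(theta) = -(theta - 2)^2 / (2 * 0.25) + const.
def dlog_p(theta):
    return -(theta - 2.0) / 0.25

# Variational family q = N(m, s^2), s = exp(log_s).  We do stochastic
# gradient *ascent* on a Monte Carlo estimate of the ELBO,
# E_q[log p(theta)] + entropy(q), via the reparameterization theta = m + s*eps.
m, log_s = 0.0, 0.0
lr = 0.05
for _ in range(3000):
    eps = rng.standard_normal(64)
    s = np.exp(log_s)
    theta = m + s * eps                       # reparameterized samples from q
    g = dlog_p(theta)
    grad_m = g.mean()                         # d ELBO / d m
    grad_log_s = (g * eps).mean() * s + 1.0   # +1 is the entropy gradient
    m += lr * grad_m
    log_s += lr * grad_log_s

print(round(m, 2), round(np.exp(log_s), 2))   # close to the true (2.0, 0.5)
```

Since the target here is itself Gaussian, the optimal $q$ matches it exactly, which makes the result easy to check; with a non-Gaussian target the same loop would find the KL-closest Gaussian.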
26,590
Uncertainty estimation in high-dimensional inference problems without sampling?
You may want to check out the "BayesX" software and possibly also the "INLA" software; both of these are likely to have some ideas that you can try (Google them). Both rely very heavily on exploiting sparsity in the parameterisation of the precision matrix (i.e., conditional independence, Markov-type models) and have inversion algorithms designed for this. Most of the examples are based on either multilevel or autoregressive Gaussian models, which should be fairly similar to the example you posted.
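As a small illustration of the sparsity these packages exploit (my own sketch, not from the answer): for a Gaussian AR(1) process, a Markov model in which each value depends only on its predecessor, the covariance matrix is dense but the precision (inverse covariance) matrix is tridiagonal.

```python
import numpy as np

# Covariance of a stationary Gaussian AR(1) process with unit innovation
# variance: Cov(theta_i, theta_j) = phi^|i-j| / (1 - phi^2).
n, phi = 8, 0.7   # toy size and autocorrelation (my own choices)
idx = np.arange(n)
cov = phi ** np.abs(idx[:, None] - idx[None, :]) / (1 - phi ** 2)
prec = np.linalg.inv(cov)

# The precision matrix is tridiagonal: entries more than one position
# off the diagonal are (numerically) zero.  This conditional-independence
# structure is what INLA-style inversion algorithms exploit.
off_band = prec[np.abs(idx[:, None] - idx[None, :]) > 1]
print(np.max(np.abs(off_band)))   # ~ 1e-15
```

In a real application one would never form the dense covariance at all, but build the sparse precision directly and work with sparse Cholesky factorizations.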
26,591
What is the relation between SVD and ALS?
... I'm confused about what ALS actually is? I thought it was something akin to SVD, or any other matrix factorization algorithm. Would it matter that I'm using explicit data instead of implicit data?

See: "Simple Movie Recommender Using SVD" and "ALS Implicit Collaborative Filtering":

"Implicit vs explicit data

Explicit data is data where we have some sort of rating, like the 1 to 5 ratings from the MovieLens or Netflix dataset. Here we know how much a user likes or dislikes an item, which is great, but this data is hard to come by. Your users might not spend the time to rate items, or your app might not work well with a rating approach in the first place.

Implicit data (the type of data we're using here) is data we gather from the users' behaviour, with no ratings or specific actions needed. It could be what items a user purchased, how many times they played a song or watched a movie, how long they've spent reading a specific article, etc. The upside is that we have a lot more of this data; the downside is that it's more noisy and not always apparent what it means. For example, with star ratings we know that a 1 means the user did not like that item and a 5 that they really loved it. With song plays it might be that the user played a song and hated it, or loved it, or somewhere in between. If they did not play a song, it might be because they don't like it or because they would love it if they just knew about it. So instead we focus on what we know the user has consumed and the confidence we have in whether or not they like any given item. We can, for example, measure how often they play a song and assume a higher confidence if they've listened to it 500 times vs. one time.

Implicit recommendations are becoming an increasingly important part of many recommendation systems as the amount of implicit data grows. For example, the original Netflix challenge focused only on explicit data, but they're now relying more and more on implicit signals. The same thing goes for Hulu, Spotify, Etsy and many others."

There are different ways to factor a matrix, like Singular Value Decomposition (SVD) or Probabilistic Latent Semantic Analysis (PLSA), if we're dealing with explicit data. A least-squares approach in its basic form means fitting some line to the data, measuring the sum of squared distances from all points to the line, and trying to get an optimal fit for missing points. With the alternating least squares approach we use the same idea, but iteratively alternate between optimizing U with V fixed, and vice versa. It is an iterative optimization process where at every iteration we try to arrive closer and closer to a factorized representation of our original data. Singular Value Decomposition is a matrix decomposition method for reducing a matrix to its constituent parts in order to make certain subsequent matrix calculations simpler. The tutorials in the first two and last links should be very helpful.
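The alternating idea described above can be sketched in a few lines (my own toy construction, not from the linked tutorials): fix $V$ and solve a small regularized least-squares problem for $U$ in closed form, then fix $U$ and solve for $V$, and repeat.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical fully observed "ratings" matrix of exact rank 2:
R = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))

k, lam = 2, 1e-3            # latent dimension and a small ridge term
U = rng.standard_normal((20, k))
V = rng.standard_normal((15, k))
for _ in range(30):
    # Each half-step is an exact regularized least-squares solve
    # with the other factor held fixed.
    U = R @ V @ np.linalg.inv(V.T @ V + lam * np.eye(k))
    V = R.T @ U @ np.linalg.inv(U.T @ U + lam * np.eye(k))

rel_err = np.linalg.norm(R - U @ V.T) / np.linalg.norm(R)
print(rel_err)   # small: U @ V.T recovers R
```

Real implicit-feedback ALS (as in the linked post) additionally weights each entry by a confidence derived from the interaction counts, but the alternating structure is the same.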
26,592
What is the relation between SVD and ALS?
The relationship between ALS and SVD in latent-factor recommender systems is the same as the relationship between OLS and the Normal Equations in linear regression.

Under the hood, Alternating Least Squares (henceforth ALS) is a two-step iterative optimization technique for finding matrices $P$, the user factor matrix, and $Q$, the item factor matrix, such that $U \approx PQ^T$. The iteration works to minimise the square of the L2 norm of the residual matrix $U - PQ^T$. This is very similar to Ordinary Least Squares (henceforth OLS), where an iterative method such as gradient descent can be used to find the coefficient vector $w$ of a linear regression model $y = Xw + \epsilon$ by minimizing the square of the L2 norm of the residual vector $y - Xw$. This establishes the fact that ALS in recommender systems is an analogue of iteratively solved OLS in linear regression.

Notation alert: $||Z||_2 ^2$ is just the formal way to write the square of the L2 norm of $Z$.

Now, let us ask ourselves this: in a linear regression setting, can we estimate the coefficient vector $w$ in any other way than using gradient descent? Perhaps by using a direct formula? The reason why we would want to do this is simple. Gradient descent has lots of problems, starting from the fact that it is slow, can get stuck in local optima, and does not guarantee convergence if the loss-function landscape is non-convex, among other things. Would it not be better if we had a formula for linear regression that would directly give us the coefficient vector $w$ which minimises the loss function $||y - Xw||_2 ^2$? This would allow us to just plug in the values of $X$ and $y$ and get $w$ directly. No iterative and computationally intensive steps need to be taken. Well, does such a formula exist to find the coefficient vector of linear regression as a function of $X$ and $y$?

Absolutely, and it comes from the normal equations; the formula looks like this: $$ w = (X^T X)^{-1} X^Ty$$ And it is orders of magnitude faster than estimating $w$ using gradient descent, and it gives the exact answer, unlike the approximations from gradient descent. For recommender systems, there also exists such a formula that finds the matrices $P$ and $Q$ such that $||U - PQ^T||_2 ^2$ is minimised, at least when the ratings matrix $U$ is fully observed. This formula is given by the Singular Value Decomposition (henceforth SVD), whose computation is straightforward and far less computationally and time intensive than the iterative ALS technique.

tl;dr: SVD is the analytical analogue to ALS in recommender systems, as the Normal Equations are the analytical analogue to iteratively solved OLS in linear regression.
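The normal-equations formula can be checked numerically in a few lines (a toy regression of my own construction; the design matrix and coefficients are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 3))          # toy design matrix
w_true = np.array([1.5, -2.0, 0.5])        # hypothetical true coefficients
y = X @ w_true + 0.01 * rng.standard_normal(100)

# Closed-form normal-equations solution.  (np.linalg.solve is preferred
# over forming the explicit inverse, for numerical stability.)
w = np.linalg.solve(X.T @ X, X.T @ y)

# Agrees with NumPy's built-in least-squares solver:
w_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.allclose(w, w_lstsq))   # True
```

For ill-conditioned $X$ the explicit $(X^TX)^{-1}$ route loses accuracy, which is one reason library solvers use QR or SVD factorizations internally.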
26,593
MAPE vs R-squared in regression models
Let's look at the definition of $R^2$: if $y_i$ are the actuals and $f_i$ the predictions, then we let $\overline{y}$ denote the mean of the actuals and $e_i = y_i-f_i$ the errors. Then $$ R^2 := 1-\frac{\sum e_i^2}{\sum (y_i-\overline{y})^2}. $$ If we want to maximize $R^2$, note that we cannot influence the denominator in this formula. Thus, maximizing $R^2$ is equivalent to minimizing the sum of squared errors, or equivalently the Mean Squared Error (MSE). And this actually makes a lot of sense: the prediction that minimizes the expected MSE is the expected value of each $Y_i$ (the distribution from which we observe $y_i$). This is often what we want. Note that other error measures like the MAPE may be minimized by other quantities, so minimizing the MAPE may yield biased point predictions. For the difference between the MAPE and $R^2$ (which, as we have seen, amounts to the MSE as an optimization criterion), see this earlier post: The difference between MSE and MAPE.
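Since the denominator $\sum (y_i-\overline{y})^2$ depends only on the data, ranking prediction sets by $R^2$ is exactly the same as ranking them by MSE; here is a quick numerical sanity check on toy data of my own:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.standard_normal(50)                  # toy actuals
f1 = y + 0.1 * rng.standard_normal(50)       # better predictions
f2 = y + 0.5 * rng.standard_normal(50)       # worse predictions

def r2(y, f):
    return 1 - np.sum((y - f) ** 2) / np.sum((y - y.mean()) ** 2)

def mse(y, f):
    return np.mean((y - f) ** 2)

# The denominator of R^2 is fixed by the data, so the prediction set
# with the lower MSE is exactly the one with the higher R^2:
print(mse(y, f1) < mse(y, f2), r2(y, f1) > r2(y, f2))   # True True
```

The identity behind this is $R^2 = 1 - n \cdot \mathrm{MSE} / \sum (y_i-\overline{y})^2$, so the two criteria can never disagree on the same data.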
26,594
Is it true that K-Means has an assumption "each cluster has a roughly equal number of observations"?
There is certainly no assumption in standard K-means algorithms that requires an equal number of points in each cluster. However, certain standard algorithms do have a tendency towards equalising the spatial variance of clusters, which can result in a (rough) tendency towards equality of cluster sizes in cases where there is overlap between the clusters. For example, one standard method is to estimate the clusters by minimising the within-cluster sum of squares (WCSS). In cases where there are several overlapping clusters, this method has a tendency to allocate points in a way that (roughly) equalises the spatial variance of the clusters, which may result in (rough) equalisation of the number of points in each cluster. Alternative methods that use parametric forms to allow greater freedom of variance in each cluster will lack this tendency.
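To see that nothing in the algorithm itself forces equal counts, here is a bare-bones Lloyd's iteration (my own sketch; the blob locations and sizes are arbitrary choices) on two well-separated blobs of deliberately unequal size. Minimizing the WCSS happily returns an 80/20 split:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two well-separated blobs of deliberately unequal size (80 vs 20 points):
X = np.vstack([rng.normal(0, 0.5, (80, 2)),
               rng.normal(5, 0.5, (20, 2))])

def kmeans(X, C, iters=50):
    # Bare-bones Lloyd's algorithm: assign each point to its nearest
    # centroid, then recompute centroids; each sweep cannot increase
    # the within-cluster sum of squares.
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        lab = d.argmin(1)
        C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(len(C))])
    return lab, C

lab, _ = kmeans(X, C=X[[0, len(X) - 1]].copy())  # one seed point per blob
print(np.bincount(lab))   # [80 20]: nothing equalizes the counts
```

The rough equalisation described above only kicks in when the clusters overlap; with well-separated groups the WCSS optimum simply follows the data.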
26,595
Is it true that K-Means has an assumption "each cluster has a roughly equal number of observations"?
I would recommend interpreting it as "often similar area/extent". But of course, opinions may vary. As the decision boundaries of neighboring clusters lie exactly halfway between the centers (cf. Voronoi diagram), there is a good argument to call these 'somewhat equal', even if it does not hold on the empty outside 'areas', which can even be infinite. So formally, area will be a problem, too. I don't see much of a "tendency" to produce the same number of observations unless you assume, for example, a uniform distribution. On the contrary, it is trivial to construct counterexamples. For example, the one-dimensional data set 1,2,3,4,5,6,7,8,9,10,100 will produce maximally unbalanced clusters (as unbalanced as you can have with 2 partitions). Because of such examples, I'd reject the claim that k-means produces clusters of the same cardinality. And there exist modifications for exactly this purpose of balancing the cardinalities, e.g., https://elki-project.github.io/tutorial/same-size_k_means
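The counterexample can be checked exactly: in one dimension, the optimal 2-means partition is always a split of the sorted values, so we can enumerate every split point and find the true optimum (a sketch of mine):

```python
import numpy as np

x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 100], dtype=float)

def wcss(groups):
    # within-cluster sum of squares for a list of 1-D clusters
    return sum(((g - g.mean()) ** 2).sum() for g in groups)

# In 1-D, the optimal 2-means partition is a split of the sorted values,
# so enumerating all split points yields the exact k-means optimum:
best = min(range(1, len(x)), key=lambda i: wcss([x[:i], x[i:]]))
print(x[:best], x[best:])   # {1..10} vs {100}: maximally unbalanced
```

Any split that pairs 100 with some of the small values pays an enormous squared-distance penalty, so the 10-vs-1 partition wins by a wide margin.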
26,596
Is it true that K-Means has an assumption "each cluster has a roughly equal number of observations"?
Let's review k-means first: the k-means algorithm clusters the data points based on updates of the centroids (hence the category, center-based clustering). Centroids are initially randomly chosen (not in all variations of k-means, of course), and are then updated in each iteration by taking the mean of each cluster. I think we are assuming the distance metric is Euclidean. (Yes, it matters.) These facts result in spherical clusters. That means, in a 2-D space, clusters can be separated using k disks; in n-dimensional space, k n-spheres will partition the space. Your question here is whether these disks (n-spheres) wrap around roughly equal numbers of data points. My answer is "yes and no"! Let me elaborate on that:

If the distribution of the data is uniform, no matter how the centroids are initially chosen, the space (and hence the data) will be roughly equally partitioned. If the distribution is NOT uniform, the results depend on how the centroids are chosen; but still, it is not easy to think of an example where the space (and NOT the data) is not roughly equally partitioned. Therefore, we cannot say that k-means produces equal-size clusters (clusters with the same number of data points) unless it is assumed that the data have a uniform (or even Gaussian) distribution. Since in many scientific examples data are normally distributed, very often we don't bother mentioning what assumptions we are building our theories on.

As a side note, we already know that clustering algorithms that tend to find spherical clusters should not be used on data points of just any distribution. Look at the masterpiece in the right figure, produced by a wrong choice of method. Yet, this does not make k-means a less powerful clustering algorithm. It is still the first tool for us to get to know our data, since it is fast ($O(n \cdot k \cdot d \cdot i)$, see here; Lloyd's version) and very easy to interpret. Depending on the type of clusters we should expect (well-separated, contiguous, center-based, density-based, etc.), we need to utilize different clustering algorithms.

References for the figures: the first two sets of plots are my screenshots of a great applet to play around with k-means, by Naftali Harris. The last plot I borrowed from a great post on Spectral Clustering by Sandipan Dey.
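The "spherical clusters" limitation can be demonstrated directly (my own toy construction, similar in spirit to the figure mentioned above): on two concentric rings, plain Lloyd k-means cuts the plane with a straight boundary instead of separating the rings, so its labels disagree badly with the true ring membership.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two concentric rings (radii 1 and 5) -- decidedly non-spherical clusters:
ang = rng.uniform(0, 2 * np.pi, 200)
radius = np.where(np.arange(200) < 100, 1.0, 5.0)
X = np.c_[radius * np.cos(ang), radius * np.sin(ang)]
X += 0.05 * rng.standard_normal((200, 2))
ring = (np.arange(200) >= 100).astype(int)   # ground-truth ring membership

# Bare-bones Lloyd k-means with k = 2:
C = X[rng.choice(200, 2, replace=False)]
for _ in range(50):
    lab = ((X[:, None] - C[None]) ** 2).sum(-1).argmin(1)
    C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                  for j in range(2)])

p = (lab == ring).mean()
agree = max(p, 1 - p)    # best agreement over the two label assignments
print(agree)             # well below 1: the linear boundary cuts both rings
```

A density- or graph-based method (e.g., spectral clustering, as in the post the figures come from) recovers the rings where a center-based method cannot.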
Is it true that K-Means has an assumption "each cluster has a roughly equal number of observations"?
Let's review k-means first: k-means algorithm clusters the data points based on the update of the centroids (hence the category, center-based clustering). Centroids are initially randomly chosen. (No
Is it true that K-Means has an assumption "each cluster has a roughly equal number of observations"? Let's review k-means first: the k-means algorithm clusters the data points based on updates of the centroids (hence the category, center-based clustering). Centroids are initially chosen at random. (Not in all variations of k-means, of course.) Centroids are then updated to the mean of each cluster in each iteration. I think we are assuming the distance metric is Euclidean. (Yes, it matters.) These facts result in spherical clusters. That means, in a 2D space, clusters can be separated using k disks. In n-dimensional space, k n-spheres will partition the space. Your question here is whether these disks (n-spheres) wrap around roughly equal numbers of data points. My answer is "yes and no"! Let me elaborate on that: If the distribution of the data is uniform, no matter how the centroids are initially chosen, the space (and hence the data) will be roughly equally partitioned. If the distribution is NOT uniform, the result depends on how the centroids are chosen. But, still, it is not easy to think of an example where the space (and NOT the data) is not equally partitioned. Therefore, we cannot say that k-means produces equal-size clusters (clusters with the same number of data points) unless we assume that the data have a uniform (or even Gaussian) distribution. Since in many scientific examples the data are normally distributed, very often we don't bother mentioning what assumptions we are building our theories on. As a side note, we already know that none of the clustering algorithms that tend to find spherical clusters should be used on data whose true clusters are not spherical. Look at the masterpiece in the right figure, produced by a wrong choice of method: Yet, this does not make k-means a less powerful clustering algorithm.
It is still the first tool for us to get to know our data, since it is fast (O(n⋅k⋅d⋅i) for the Lloyd version) and very easy to interpret. Depending on the type of clusters we should expect (well-separated, contiguous, center-based, density-based, etc.), we need to utilize different clustering algorithms. References for the figures: The first two sets of plots are my screenshots of a great applet for playing around with k-means: Naftali Harris. The last plot I borrowed from a great post on Spectral Clustering by Sandipan Dey.
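The "yes and no" above can be checked numerically. Below is a minimal sketch of Lloyd's k-means in plain numpy (the function name, data sizes, and seeds are my own choices, not from the original post): on uniform data the k Voronoi cells partition the space, and hence the data, into roughly equal clusters; on a heavily imbalanced mixture the cluster sizes need not be equal.

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Minimal Lloyd's k-means: random initial centroids, Euclidean distance."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its cluster
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels

rng = np.random.default_rng(42)

# Uniform data: the k disks partition the space, hence the data, roughly equally.
X_uniform = rng.uniform(0.0, 1.0, size=(3000, 2))
sizes = np.bincount(kmeans(X_uniform, 3), minlength=3)
print(sizes)  # three counts, each roughly 1000

# Non-uniform data: one dense blob (2700 points) and one small blob (300 points).
X_skewed = np.vstack([rng.normal(0.0, 0.05, size=(2700, 2)),
                      rng.normal(3.0, 0.05, size=(300, 2))])
skew_sizes = np.bincount(kmeans(X_skewed, 3), minlength=3)
print(skew_sizes)  # far from equal: the dense blob gets split
```

The exact counts depend on the initialization, which is the point the answer makes: equal-size clusters come from the data distribution, not from k-means itself.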
26,597
When is having an unbiased estimator important?
I think it's safe to say there's no situation when one needs an unbiased estimator; for example, if $\mu = 1$ and we have $E[\hat \mu] = \mu + \epsilon$, there has got to be an $\epsilon$ small enough that you cannot possibly care. With that said, I think it's important to see unbiasedness as the limit of something that is good: all else remaining the same, less bias is better. And there are plenty of consistent estimators in which the bias is so high in moderate samples that the estimator is greatly impacted. For example, in most maximum likelihood estimators, the estimate of variance components is downward biased. In the case of prediction intervals, for example, this can be a really big problem in the face of overfitting. In short, I would be extremely hard pressed to find a situation in which truly unbiased estimates are needed. However, it's quite easy to come up with problems in which the bias of an estimator is the crucial problem. Having an estimator be unbiased is probably never an absolute requirement, but having an unbiased estimator does mean that there's one potentially serious issue taken care of. EDIT: After thinking about it a little more, it occurred to me that out-of-sample error is the perfect answer to your request. The "classic" method for estimating out-of-sample error is the maximum likelihood estimator, which in the case of normal data reduces to the in-sample error. While this estimator is consistent, with models with large degrees of freedom the bias is so bad that it will recommend degenerate models (i.e. estimate 0 out-of-sample error for models that are heavily overfit). Cross-validation is a clever way of getting an unbiased estimate of the out-of-sample error.
If you use cross-validation to do model selection, you again downwardly bias your out-of-sample error estimate... which is why you hold out a validation dataset to get an unbiased estimate for the final selected model. Of course, my comment about truly unbiased estimates still stands: if I had an estimator with expected value equal to the out-of-sample error + $\epsilon$, I would happily use it instead for small enough $\epsilon$. But the method of cross-validation is motivated by trying to get an unbiased estimator of the out-of-sample error. And without cross-validation, the field of machine learning would look completely different than it does now.
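The claim that the ML estimate of a variance is downward biased is easy to verify by simulation. This sketch (sample size, true variance, and replication count are arbitrary choices of mine) compares the divide-by-$n$ ML estimator against the divide-by-$(n-1)$ unbiased estimator over many repeated samples:

```python
import numpy as np

# With n = 5 draws from a normal with variance 4, the ML variance estimator
# has expectation (n-1)/n * sigma^2 = 3.2, i.e. it is biased low.
rng = np.random.default_rng(0)
n, true_var, reps = 5, 4.0, 200_000

samples = rng.normal(0.0, np.sqrt(true_var), size=(reps, n))
mle_var = samples.var(axis=1, ddof=0)       # divide by n   (ML estimator)
unbiased_var = samples.var(axis=1, ddof=1)  # divide by n-1 (unbiased)

print(mle_var.mean())       # close to 3.2, not 4.0
print(unbiased_var.mean())  # close to 4.0
```

Note that both estimators are consistent; the gap $(n-1)/n$ vanishes as $n$ grows, which is exactly the "bias matters in moderate samples" point the answer is making.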
26,598
Difference between the Law of Large Numbers and the Central Limit Theorem in layman's term? [duplicate]
The law of large numbers stems from two things: The variance of the estimator of the mean goes like ~ 1/N Markov's inequality You can derive it from Markov's inequality: \begin{eqnarray} P(\vert X \vert \ge a) &\le& \frac{\mathrm{E}\left(\vert X\vert\right)}{a} \end{eqnarray} and the statistical properties of the estimator of the mean: \begin{eqnarray} \bar{X} &=& \sum_{n=1}^N \frac{x_n}{N} \\ \mathrm{E}\left(\bar{X} \right) &=& \mu \\ \mathrm{Var}\left( \bar{X} \right) &=& \frac{\sigma^2}{N} \end{eqnarray} Doing a quick trick (applying Markov's inequality to $(\bar{X}-\mu)^2$) we find: \begin{eqnarray} P( (\bar{X}-\mu)^2 \ge \epsilon^2 ) &\le& \frac{\mathrm{E}\left((\bar{X}-\mu)^2 \right)}{\epsilon^2}\\ & \le & \frac{\sigma^2}{N \epsilon^2} \end{eqnarray} So as $N \to \infty$, the right-hand side goes to zero, and so $\bar{X}$ becomes arbitrarily $\epsilon$-close to the real mean, $\mu$. The central limit theorem hinges on: As you sum up a bunch of random variables $x_1+x_2+x_3$, no matter their distributions, you are essentially "mixing" the probability densities $P(x_1), P(x_2), P(x_3)$ -- technically you're convolving them in $x$ space and multiplying their characteristic functions $\phi(k)$ in Fourier space -- making the resulting "aggregate" pdf $P(x_1+x_2+x_3)$ look more and more Gaussian. The proof of why this happens is a bit tricky. But it stems from the fact that characteristic functions are bounded in magnitude by 1, so when you multiply / convolve them over and over again, they are efficiently described in both Fourier $k$ and $x$ space by a log Taylor expansion up to second order -- meaning you only need to know the first and second cumulants, the mean $\mu$ and variance $\sigma^2$.
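Both facts can be seen in a short simulation. This sketch (distribution, sample sizes, and seed are my own illustrative choices) uses Exponential(1), which has $\mu = \sigma^2 = 1$: the sample mean concentrates at the rate the $\sigma^2/(N\epsilon^2)$ bound predicts, and the standardized sum of even this skewed distribution looks Gaussian.

```python
import numpy as np

rng = np.random.default_rng(1)

# Law of large numbers: Var(Xbar) = sigma^2/N, so Xbar concentrates around mu = 1.
for N in (10, 1000, 100_000):
    xbar = rng.exponential(1.0, size=N).mean()
    print(N, xbar)

# Central limit theorem: summing 30 independent draws convolves their densities.
# The sum has mean 30 and variance 30; standardize and check it looks Gaussian.
sums = rng.exponential(1.0, size=(50_000, 30)).sum(axis=1)
z = (sums - 30) / np.sqrt(30)
print(z.mean(), z.std(), np.mean(np.abs(z) < 1.96))
```

The last line prints a mean near 0, a standard deviation near 1, and a fraction near 0.95 inside $\pm 1.96$, as the Gaussian limit predicts.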
26,599
Proof of Pitman–Koopman–Darmois theorem
The proof is somewhat technical and can be found in its original form here: On Distributions Admitting a Sufficient Statistic -- Koopman: http://yaroslavvb.com/papers/koopman-on.pdf
26,600
Different results after propensity score matching in R
This happens when you have (at least) two individuals with the same propensity score. MatchIt randomly selects one to include in the matched set. My recommendation would be to select one matched set and carry out your analysis with it. I agree that trying other conditioning methods such as full matching and IPW would be a good idea. You could report the results of the various analyses in a sensitivity analysis section. Edit: This is probably the wrong answer. See Viktor's answer for what is likely the actual cause. Edit 2020-12-07: For MatchIt versions earlier than 4.0.0, the only random selection that would occur with nearest neighbor matching was when ties were present or when m.order = "random", which is not the default. If few variables were used in matching, and especially if they were all categorical or took few values, ties are possible. As of version 4.0.0, there are no longer any random processes unless m.order = "random"; all ties are broken deterministically based on the order of the data.
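The tie-breaking mechanism itself is simple to illustrate. This is a hypothetical Python sketch, not MatchIt's actual R code: greedy 1:1 nearest-neighbor matching where two controls tie on the propensity score, broken either deterministically by data order (as in MatchIt >= 4.0.0) or at random (the pre-4.0.0 behavior with ties).

```python
import numpy as np

# Two treated units; two controls tie at propensity score 0.30.
treated_ps = np.array([0.30, 0.70])
control_ps = np.array([0.30, 0.30, 0.70])

def nearest_neighbor_match(treated, control, rng=None):
    """Greedy 1:1 matching; ties broken randomly if rng given, else by data order."""
    available = list(range(len(control)))
    matches = []
    for ps in treated:
        d = np.abs(control[available] - ps)
        tied = [available[i] for i in np.flatnonzero(d == d.min())]
        pick = rng.choice(tied) if rng is not None else tied[0]
        matches.append(int(pick))
        available.remove(pick)
    return matches

# Deterministic tie-breaking by data order: the same matched set every run.
print(nearest_neighbor_match(treated_ps, control_ps))  # [0, 2]
# Random tie-breaking: control 0 or control 1 may be matched to the first unit,
# so repeated runs can yield different matched sets (and different estimates).
print(nearest_neighbor_match(treated_ps, control_ps,
                             rng=np.random.default_rng(7)))
```

This is why either fixing a seed before matching or upgrading to a version with deterministic tie-breaking makes the results reproducible.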