20,401 | TfidfVectorizer: should it be used on train only or train+test | An incremental approach is robust to leakage.
Use case: train document classification model against large corpus and test on a new set of documents.
At training time: calculate TF-IDF on the training data and use the resulting features for the classification model.
At test time: add the new documents to the corpus and recalculate TF-IDF on the whole corpus. Use the TF-IDF values for the new documents as inputs to the model for scoring.
If the number of documents being tested/scored is small, you may wish to speed up the process by recalculating only the TF and reusing the existing IDF figures, as they won't be affected much by a small number of documents.
Live Use: Same as Test. I.e. the approach is robust to live use and doesn't leak.
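A minimal sketch of the fast path described above: scikit-learn's `TfidfVectorizer` reuses the IDF learned at fit time when you call `transform`, so only the TF of the new documents is computed. The toy corpus here is invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer

train_docs = ["the cat sat", "the dog barked", "cats and dogs"]
new_docs = ["the cat barked"]  # documents arriving at test/live time

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_docs)  # learns vocabulary and IDF from the training corpus
X_new = vec.transform(new_docs)          # reuses the training IDF; only TF is computed here
```

To recalculate the IDF over the enlarged corpus instead (the full version of the approach above), you would re-fit on the combined document set.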
20,402 | TfidfVectorizer: should it be used on train only or train+test | Ideally it should be fit on the entire corpus, so as to learn the full vocabulary and assign a score to each term.
Corpus = the grouped data around a certain entity, or the entire corpus.
Train = the training split of the data from that entity or from the entire corpus.
Test = the test split of the data from that entity or from the entire corpus.
vec = TfidfVectorizer()
vec.fit(corpus)
trainx = vec.transform(train)
testx = vec.transform(test)
20,403 | Median Absolute Deviation vs Standard Deviation | Robustness to outliers is a double-edged sword: Sometimes we want to estimate things in a way that is robust to outliers, which means that we do not mind getting large outliers. At other times we want to avoid large outliers, so we want to estimate things in a way that is not robust to outliers. Similarly, with measures of spread, sometimes we want something that is robust to outliers, so that large outliers do not increase the measure. At other times we want our measure of spread to reflect the presence of large outliers by manifesting in a larger value.
In decision-theory, issues like this are dealt with by specifying a penalty/loss function which penalises you for your error in estimation of a quantity. Two common loss functions are absolute-error loss and squared-error loss (shown in the following plots, taken from this answer by Jean-Paul).
Absolute-error loss penalises you according to the absolute deviation of your estimate from the true value. This form of loss function leads to estimation using medians. This form of loss function is robust to outliers in the sense that outliers contribute a penalty that is proportionate to their size. Measures of spread in this context reflect the expected loss of a particular estimate of central location, with the expected loss being a weighted sum of absolute deviations from the estimated central location.
Squared-error loss penalises you according to the squared deviation of your estimate from the true value. This form of loss function leads to estimation using means. This form of loss function is sensitive to outliers in the sense that outliers contribute a penalty that is proportionate to their squared deviation - this magnifies the effect of large outliers. Measures of spread in this context reflect the expected loss of a particular estimate of central location, with the expected loss being a weighted sum of squared deviations from the estimated central location.
In regard to the choice between the median absolute deviation and the standard deviation, these same considerations apply. The former is a measure of spread that represents expected absolute-error loss, and is more robust to outliers: outliers do not manifest in large increases in the measure of spread. The latter is a measure of spread that represents expected squared-error loss, and is more sensitive to outliers: outliers will manifest in large increases in the measure of spread.
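The difference in robustness is easy to see numerically; this small sketch uses Python's standard library only, and the data are invented:

```python
import statistics

def mad(xs):
    """Median absolute deviation from the median."""
    m = statistics.median(xs)
    return statistics.median([abs(x - m) for x in xs])

data = [1, 2, 3, 4, 5]
with_outlier = data + [100]

print(statistics.stdev(data), statistics.stdev(with_outlier))  # the SD explodes
print(mad(data), mad(with_outlier))                            # the MAD barely moves
```

A single large outlier multiplies the standard deviation many times over, while the MAD shifts only slightly — exactly the distinction drawn above.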
20,404 | Why does the policy iteration algorithm converge to optimal policy and value function? | I think the part you are missing is that $V^{\pi_2} \ge V^{\pi_1}$ is guaranteed for the same reason we can order $\pi_2 \ge \pi_1$. That is the essentially the definition of one policy being better than another - that its value function is greater or equal in all states. You have guaranteed this by choosing the maximising actions - no state value can possibly be worse than it was before, and if just one action choice has changed to choose a better maximising action, then you already know (but may not have calculated) that the $V^{\pi_2}(s)$ for that state is going to be higher than it was for $V^{\pi_1}(s)$.
When we choose to maximise outcomes to generate $\pi_2$, we don't know what the new $V^{\pi_2}(s)$ is going to be for any state, but we do know that $\forall s: V^{\pi_2}(s) \ge V^{\pi_1}(s)$.
Therefore, going back through the loop and calculating $V^{\pi_2}$ for the new policy is guaranteed to have same or higher values than before, and when it comes to update the policy again, $\pi_3 \ge \pi_2 \ge \pi_1$.
20,405 | Why does the policy iteration algorithm converge to optimal policy and value function? | First let's see why the Policy Iteration Algorithm works. It has two steps.
Policy Evaluation Step:
$v_n = r_{d_n} + \gamma P_{d_n}v_n$ is the general vectorial form of the system of linear equations.
Here, the terms $r_{d_n}, P_{d_n}$ are the immediate rewards and the corresponding rows of the transition matrix.
These terms depend on the policy $\Pi_n$.
Solving the above system of linear equations, we can find the values of $v_n$.
Policy Improvement Step:
Assume that we were able to find a new policy $\Pi_{n+1}$ such that
\begin{align}
r_{d_{n+1}} + \gamma P_{d_{n+1}}v_n & \ge r_{d_n} + \gamma P_{d_n}v_n = v_n \\
\implies r_{d_{n+1}} & \ge [I - \gamma P_{d_{n+1}}]v_n \quad \text{say this is eqn. 1}
\end{align}
(The equality on the right holds because $v_n$ solves the evaluation equation for $\Pi_n$.)
Now, based on the new policy $\Pi_{n+1}$,
we can find
$v_{n+1} = r_{d_{n+1}} + \gamma P_{d_{n+1}}v_{n+1}$, say this is equation 2.
We are going to show that $v_{n+1} \ge v_n$;
i.e. essentially for all the states, the newly chosen policy $\Pi_{n+1}$ gives a better value compared to the previous policy $\Pi_{n}$
Proof:
From equation 2, we have
$[I - \gamma P_{d_{n+1}}]v_{n+1} = r_{d_{n+1}}$
Combining this with equation 1 gives $[I - \gamma P_{d_{n+1}}]v_{n+1} \ge [I - \gamma P_{d_{n+1}}]v_n$. Since $[I - \gamma P_{d_{n+1}}]^{-1} = \sum_{k \ge 0} (\gamma P_{d_{n+1}})^k$ has only non-negative entries, multiplying both sides by it preserves the inequality, so
$v_{n+1} \ge v_{n}$
Essentially, the values are monotonically increasing with each iteration.
This is important for understanding why Policy Iteration will not get stuck at a local maximum.
A policy is nothing but a set of state-action pairs.
At every policy iteration step, we try to find at least one state-action pair which is different between $\Pi_{n+1}$ and $\Pi_{n}$ and check whether $r_{d_{n+1}} + \gamma P_{d_{n+1}}v_n \ge r_{d_n} + \gamma P_{d_n}v_n$. Only if the condition is satisfied will we compute the solution to the new system of linear equations.
Assume $\Pi^*$ and $\Pi^\#$ are the global and local optimum respectively.
This implies $v_* \ge v_\#$.
Assume the algorithm is stuck at the local optimum.
If this is the case, then the policy improvement step will not stop at the local optimum state-action space $\Pi^\#$,
as there exists at least one state-action in $\Pi^*$ which is different from $\Pi^\#$ and yields a higher value of $v_{*}$ compared to $v_{\#}$
or, in other words,
$[I-\gamma P_{d_*}]v_* \ge [I-\gamma P_{d_*}]v_{\#}$
$\implies r_{d_*} \ge [I-\gamma P_{d_*}]v_{\#}$
$\implies r_{d_*} + \gamma P_{d_*}v_{\#} \ge v_{\#}$
$\implies r_{d_*} + \gamma P_{d_*}v_{\#} \ge r_{d_\#} + \gamma P_{d_\#}v_\#$
Hence, policy iteration does not stop at a local optimum.
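The monotone-improvement argument above can be checked numerically on a toy MDP. This sketch is an invented illustration: a two-state deterministic chain, exact policy evaluation by repeated sweeps, and greedy improvement, with an assertion that state values never decrease between iterations:

```python
# Toy MDP (invented): two states, two actions; P[s][a] = (next_state, reward).
P = {
    0: {0: (0, 0.0), 1: (1, 1.0)},
    1: {0: (0, 2.0), 1: (1, 0.5)},
}
gamma = 0.9

def evaluate(policy, sweeps=500):
    # Policy evaluation: iterate v = r_d + gamma * P_d v to its fixed point.
    v = [0.0, 0.0]
    for _ in range(sweeps):
        v = [P[s][policy[s]][1] + gamma * v[P[s][policy[s]][0]] for s in range(2)]
    return v

def greedy(v):
    # Policy improvement: pick the action maximising r + gamma * v(next state).
    return [max(P[s], key=lambda a: P[s][a][1] + gamma * v[P[s][a][0]])
            for s in range(2)]

policy = [0, 1]            # start from an arbitrary policy
v = evaluate(policy)
while True:
    new_policy = greedy(v)
    new_v = evaluate(new_policy)
    assert all(b >= a - 1e-9 for a, b in zip(v, new_v))  # values are monotone
    if new_policy == policy:
        break
    policy, v = new_policy, new_v

print(policy, [round(x, 2) for x in v])  # the final, optimal policy and values
```

Each pass through the loop produces values at least as high as the previous ones, matching $v_{n+1} \ge v_n$ above, and the loop halts only when the greedy policy no longer changes.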
20,406 | Why does the policy iteration algorithm converge to optimal policy and value function? | The optimal value function $v_*(s) := \max_\pi v_\pi(s)$ fulfills the Bellman optimality equation
$$ v(s) = \max_a R(s,a) + \gamma \sum_{s'} P(s'|s,a)\, v(s').$$
Also, we know by Banach's fixed point theorem that any value function $v$ that fulfills this equation is equal to $v_*$ (the Bellman optimality operator is a $\gamma$-contraction, so its fixed point is unique), which implies that the corresponding policy must be an optimal policy as well.
Given the statements in the previous paragraph, we can argue as follows: the policy iteration algorithm stops once the policy improvement step doesn't change $\pi$. That is exactly the case when $v_\pi$ fulfills the Bellman optimality equation. Thus $\pi$ must be equal to an optimal policy once policy iteration stops.
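The uniqueness of the fixed point can be illustrated by iterating the Bellman optimality operator from two very different starting guesses and watching both converge to the same values. The tiny deterministic MDP below is invented for the sketch:

```python
# Tiny invented MDP: T[s][a] = (next_state, reward), deterministic transitions.
T = {
    0: {0: (0, 0.0), 1: (1, 1.0)},
    1: {0: (0, 2.0), 1: (1, 0.5)},
}
gamma = 0.9

def bellman_opt(v):
    # One application of the Bellman optimality operator:
    # (Bv)(s) = max_a [ R(s,a) + gamma * v(s') ].
    return [max(r + gamma * v[s2] for (s2, r) in T[s].values()) for s in T]

v_a, v_b = [0.0, 0.0], [100.0, -100.0]  # two very different starting points
for _ in range(500):
    v_a, v_b = bellman_opt(v_a), bellman_opt(v_b)

# Both sequences converge to the same unique fixed point v_*.
print([round(x, 2) for x in v_a])
```

Because the operator is a $\gamma$-contraction, the gap between the two iterates shrinks by a factor of at least $\gamma$ per step, so both sequences are driven to the single fixed point $v_*$.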
20,407 | Multiple linear regression degrees of freedom | It's the number of predictor (x) variables; the additional $-1$ in the formula is for the intercept, which is an additional parameter to estimate. The $Y$ doesn't count. So in your example $k=2$ and the error df is $N-3$.
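As a quick numeric check (the sample size here is invented), the error degrees of freedom follow directly from the counting rule above:

```python
# Hypothetical example: N observations, k predictors, plus one intercept.
N, k = 25, 2
params_estimated = k + 1      # k slopes + 1 intercept
df_error = N - k - 1          # equivalently, N - params_estimated
print(df_error)  # 22
```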
20,408 | Clustering probability distributions - methods & metrics? | (Computational) Information Geometry is a field which deals with exactly these kinds of problems. K-means has an extension called Bregman k-means which uses divergences (the squared Euclidean distance of standard k-means is a particular case, as is the Kullback-Leibler divergence). A given divergence is associated with a distribution, e.g. squared Euclidean with the Gaussian.
You can also have a look at the work of Frank Nielsen, for example on
clustering histograms with k-means
You can also have a look at Wasserstein distances (optimal transport), mentioned as the Earth Mover's Distance in a previous post.
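For equal-sized one-dimensional samples, the 1-Wasserstein (Earth Mover's) distance reduces to the mean absolute difference between the sorted values, so a minimal sketch needs no libraries (the sample values are invented):

```python
def wasserstein_1d(xs, ys):
    # For equal-sized 1-D samples, W1 is the mean absolute difference
    # between sorted values, i.e. between the empirical quantiles.
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

print(wasserstein_1d([0, 1, 2], [1, 2, 3]))  # 1.0 — every point shifts by one
```

A pairwise matrix of such distances between your distributions can then be fed to any distance-based clusterer (e.g. hierarchical clustering).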
20,409 | Clustering probability distributions - methods & metrics? | In their paper on the EP-Means algorithm, Henderson et al. review approaches to this problem and give their own. They consider:
Parameter clustering - determine parameters for the distributions based on prior knowledge of the distribution, and cluster based on those parameters
note that here, you could actually use any functional on the data, not just parameter estimates, which is useful if you know your data comes from different distributions
Histogram binning - separate the data into bins, and consider each bin as a dimension to be used in spatial clustering
EP-Means (their approach) - define distributional centroids (mixture of all distributions assigned to a cluster) and minimize the sum of the squares of the Earth Mover's Distance (something like the expected value of the $L^1$ distance between CDFs) between the distributional centroids and the distributions assigned to that cluster.
Another technique that I've used with success is to cluster all the observed points from all the distributions individually, and then assign to distribution $i$ the soft probability corresponding to the proportion of its points which end up in each cluster. On the downside, it's much harder to separate distributions that way. On the upside, it kind of auto-regularizes and assumes that all distributions are the same. I would only use it when that regularization property is desired, though.
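The first approach above (parameter clustering) can be sketched in a few lines: reduce each distribution to parameter estimates, then cluster those. The sample data and the two candidate centroids below are invented for illustration; any standard clusterer such as k-means would play the same role as the nearest-centroid step:

```python
import random
import statistics

random.seed(0)

# Invented example: three distributions, observed only through samples.
samples = [
    [random.gauss(0, 1) for _ in range(500)],
    [random.gauss(0.1, 1) for _ in range(500)],
    [random.gauss(5, 2) for _ in range(500)],
]

# Step 1: reduce each distribution to parameter estimates (mean, stdev).
features = [(statistics.mean(s), statistics.stdev(s)) for s in samples]

# Step 2: cluster the parameter vectors (here, nearest of two assumed centroids).
centroids = [(0.0, 1.0), (5.0, 2.0)]
labels = [min(range(len(centroids)),
              key=lambda i: (m - centroids[i][0]) ** 2 + (sd - centroids[i][1]) ** 2)
          for m, sd in features]
print(labels)  # the two near-identical distributions share a cluster
```

The two distributions with nearly equal parameters land in the same cluster, while the third is separated — the upside and downside being exactly as described: easy when parameters differ, blind to differences the chosen parameters don't capture.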
20,410 | Clustering probability distributions - methods & metrics? | You should proceed in two steps. (1) Data reduction and (2) Clustering.
For step (1), you should carefully inspect your data and determine a reasonable probability distribution for your data. You seem to have thought about this step already. The next step is to estimate the parameters of these distributions. You might fit a model separately for each unit to be clustered, or it may be appropriate to use a more sophisticated model such as a generalized linear mixed model.
For step (2), you can then cluster based on these parameter estimates. At this stage you should have a small number of parameter estimates per unit. As described in the answer to this post, you can then cluster on these parameter estimates.
This answer is necessarily somewhat vague -- there is no "canned" solution here, and a great deal of statistical insight is needed at each step to select from a nearly infinite number of methods that may be relevant, depending on your unique problem. The statement of your question shows that you have taught yourself a good deal of statistical knowledge, which is commendable, but you still have some fundamental misunderstandings of core statistical concepts, such as the distinction between a probability distribution and observations from a probability distribution. Consider taking/auditing a mathematical statistics course or two.
20,411 | How to determine the sample size needed for repeated measurement ANOVA? | How to perform power analysis on repeated measures ANOVA?
G*Power 3 is free software that provides a user-friendly GUI interface for performing power calculations.
It supports power calculations for repeated measures ANOVA.
What is the appropriate analysis for your design?
Here are a range of points related to what you have mentioned:
More time points will give a clearer indication of how the effect, if any, of your intervention operates over time. Thus, if the improvements decay over time or get greater, more time points will give a clearer sense of these patterns, both on average and at an individual level.
If you have 12 time points or more, I'd look at multilevel modelling, particularly if you are expecting any missing observations. You are unlikely to be interested in whether there is an effect of time. Rather you are likely to be interested in various specific effects (e.g., changes pre and post intervention; perhaps a linear or quadratic improvement effect post-intervention). You could also look at using planned contrasts on top of repeated measures ANOVA. Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence is a good starting point to learn about multilevel modelling of repeated measures data.
The number of time points over and above pre- and post- won't do much to increase your power to detect the effect of your intervention. More time points will increase your reliability of measurement, and it might ensure that you capture the time period where the effect applies, but probably the bigger issue will be the sample size in the two conditions.
Assuming you are truly randomly allocating cases to conditions, the populations are by definition equal on the dependent variable, and one could argue that a significance test of baseline differences is meaningless. That said, researchers often still do it, and I suppose it does provide some evidence that random allocation has actually occurred.
There is a fair amount of debate about the best way to test the effect of an intervention in a pre-post-intervention-control design. A few options include: (a) the condition * time interaction; (b) the effect of condition but just at post intervention; (c) an ANCOVA looking at the effect of condition, controlling for pre, with post as the DV.
20,412 | SVD dimensionality reduction for time series of different length | There is a reasonably new area of research called Matrix Completion, that probably does what you want. A really nice introduction is given in this lecture by Emmanuel Candes
20,413 | SVD dimensionality reduction for time series of different length | Filling with zero is bad. Try filling with resampling using observations from the past.
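For instance (my own sketch, assuming the shorter series has been padded with NaN), the gap can be filled by resampling, with replacement, from the observed past:

```python
import numpy as np

rng = np.random.default_rng(0)
series = np.array([1.2, 0.7, 1.5, 0.9, np.nan, np.nan])  # short series, NaN-padded
observed = series[~np.isnan(series)]

# Replace each missing value with a draw from the observed past.
filled = np.where(np.isnan(series),
                  rng.choice(observed, size=series.size),
                  series)
```

This preserves the marginal distribution of the observed values, unlike zero-filling, which drags the series toward zero.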
20,414 | SVD dimensionality reduction for time series of different length | Just a thought: you might not need the full SVD for your problem. Let M = U S V* be the SVD of your d by n matrix (i.e., the time series are the columns). To achieve the dimension reduction you'll be using the matrices V and S. You can find them by diagonalizing M* M = V (S*S) V*. However, because you are missing some values, you cannot compute M* M. Nevertheless, you can estimate it. Its entries are sums of products of columns of M. When computing any of the SSPs, ignore pairs involving missing values. Rescale each product to account for the missing values: that is, whenever a SSP involves n-k pairs, rescale it by n/(n-k). This procedure is a "reasonable" estimator of M* M and you can proceed from there. If you want to get fancier, maybe multiple imputation techniques or Matrix Completion will help.
(This can be carried out in many statistical packages by computing a pairwise covariance matrix of the transposed dataset and applying PCA or factor analysis to it.)
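A sketch of this estimator (my own illustration with simulated data; following the answer, the series sit in the columns of M, so I rescale each cross-product by the column length over the number of co-observed pairs):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(50, 8))              # d x n matrix, one series per column
M[rng.random(M.shape) < 0.2] = np.nan     # knock out roughly 20% of the entries

d = M.shape[0]
mask = ~np.isnan(M)
M0 = np.where(mask, M, 0.0)
pairs = mask.T.astype(float) @ mask.astype(float)  # co-observed count per column pair
G = (M0.T @ M0) * (d / pairs)             # rescaled estimate of M* M

# Diagonalize the estimate to recover V and the squared singular values.
evals, V = np.linalg.eigh(G)
```

Zeroing the missing entries makes them drop out of each sum of products, and the elementwise rescaling compensates for the terms that were skipped.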
20,415 | SVD dimensionality reduction for time series of different length | You could estimate univariate time series models for the 'short' series and extrapolate them into the future to 'align' all the series.
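As a minimal sketch of the idea (my own; a hand-rolled AR(1) stands in for whatever univariate model you would actually fit):

```python
import numpy as np

def ar1_extend(series, n_ahead):
    """Fit x_t = phi * x_{t-1} by least squares, then extrapolate n_ahead steps."""
    x_prev, x_next = series[:-1], series[1:]
    phi = (x_prev @ x_next) / (x_prev @ x_prev)
    out = list(series)
    for _ in range(n_ahead):
        out.append(phi * out[-1])
    return np.array(out)

short = np.array([8.0, 4.0, 2.0, 1.0])
print(ar1_extend(short, 2))  # phi = 0.5, so the series continues 0.5, 0.25
```

Once every short series is extended to the common length, the matrix has no gaps and the SVD can be computed directly.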
20,416 | SVD dimensionality reduction for time series of different length | I'm somewhat confused by your example code, as it seems you drop the V variable from the computation of newX. Are you looking to model X as a reduced rank product, or are you interested in a reduced column space of X? In the latter case, I think an EM-PCA approach would work. You can find MATLAB code under the title Probabilistic PCA with missing values.
hth,
20,417 | Support Vector Machine - Calculate w by hand | Solving the SVM problem by inspection
By inspection we can see that the boundary decision line is the function $x_2 = x_1 - 3$. Using the formula $w^T x + b = 0$ we can obtain a first guess of the parameters as
$$ w = [1,-1] \ \ b = -3$$
Using these values we would obtain the following width between the support vectors: $\frac{2}{\sqrt{2}} = \sqrt{2}$. Again by inspection we see that the width between the support vectors is in fact of length $4 \sqrt{2}$ meaning that these values are incorrect.
Recall that scaling the boundary by a factor of $c$ does not change the boundary line, hence we can generalize the equation as
$$ cx_1 - cx_2 - 3c = 0$$
$$ w = [c,-c] \ \ b = -3c$$
Plugging back into the equation for the width we get
\begin{aligned}
\frac{2}{||w||} & = 4 \sqrt{2}
\\
\frac{2}{\sqrt{2}c} & = 4 \sqrt{2}
\\
c & = \frac{1}{4}
\end{aligned}
Hence the parameters are in fact
$$ w = [\frac{1}{4},-\frac{1}{4}] \ \ b = -\frac{3}{4}$$
To find the values of $\alpha_i$ we can use the following two constraints which come from the dual problem:
$$ w = \sum_i^m \alpha_i y^{(i)} x^{(i)} $$
$$\sum_i^m \alpha_i y^{(i)} = 0 $$
And using the fact that $\alpha_i$ is nonzero only for the support vectors (i.e. 3 vectors in this case), we obtain the system of simultaneous linear equations:
\begin{aligned}
\begin{bmatrix} 6 \alpha_1 - 2 \alpha_2 - 3 \alpha_3 \\ -1 \alpha_1 - 3 \alpha_2 - 4 \alpha_3 \\ 1 \alpha_1 - 1 \alpha_2 - 1 \alpha_3 \end{bmatrix} & = \begin{bmatrix} 1/4 \\ -1/4 \\ 0 \end{bmatrix}
\\
\alpha & = \begin{bmatrix} 1/16 \\ 1/16 \\ 0 \end{bmatrix}
\end{aligned}
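These values can be checked numerically. The sketch below is my own (Python/NumPy); the three points and their labels are inferred from the coefficient matrix above and are not stated explicitly in the post:

```python
import numpy as np

# Hypothetical reconstruction of the support vectors from the coefficients
# above: one positive point and two negative points.
x = np.array([[6.0, -1.0], [2.0, 3.0], [3.0, 4.0]])
y = np.array([1.0, -1.0, -1.0])
alpha = np.array([1 / 16, 1 / 16, 0.0])

w = (alpha * y) @ x            # w = sum_i alpha_i y_i x_i, giving (1/4, -1/4)
b = y[0] - w @ x[0]            # from y_i (w . x_i + b) = 1 on a support vector: -3/4
width = 2 / np.linalg.norm(w)  # margin width, 4 * sqrt(2)
```

The recovered w, b, and margin width agree with the values derived by inspection.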
Source
https://ai6034.mit.edu/wiki/images/SVM_and_Boosting.pdf
Full post here
20,418 | Advice on collaborations with applied scientists | You are getting good advice, but as your experience widens, it will diversify.
Other possibilities include:
Scientists should have considerable subject-matter expertise, for example on measurement and what kind of relationships make physical (biological, whatever) sense. Showing that you respect their expertise is a natural and congenial way to establish a good relationship.
Scientists may know statistical stuff you don't. For example, most astronomers know more about irregular time series and non-detection problems than many statisticians do. Many fields use circular statistics, which even a full statistical education rarely includes.
Graphs are often a lingua franca. Curiously or not, economists often distrust graphs as they are schooled to treat statistics in a highly formal fashion (your mileage may vary) and to avoid subjectivity (meaning, judgement).
Sometimes you need to back off. If scientists don't know what they expect, but merely ask for the analysis or something that's publishable, they're wasting your time and you've better things to do. If the data are a haphazard mess, then they can't be rescued by any smart analysis.
Always establish an escape route. Your conditions could include (a) agreeing only to preliminary discussion (b) a limit on your time or other commitment (c) the right to back off if they won't follow your advice (d) some kind of idea on conditions for co-authorship. Beware the situation when a scientist just keeps coming back for a little more. Also, beware the situation in which you're treated like a person from the gas company or a plumber: you are called in to clear up a mess but they feel no obligation to maintain a relationship once that's done.
I am not a statistician but write from experience in so far as I know more statistics than most of my scientist colleagues. If each party respects the other, the relationship can be highly fruitful.
20,419 | Advice on collaborations with applied scientists | Of course, your attitude is everything. If your clients/collaborators feel that you are there to support—as opposed to judge—that will go a long way. But, even then, there are issues that pop up. The two bullets you mention are key.
First, always stress that you want them to produce the very best science, and while you recognize that there may be discipline-specific conventions, that doesn't mean there may not be better ways to accomplish the task. To that end, your two best friends would be: (1) the research question, and (2) any and all of the model assumptions. If the answer to the RQs can be obtained (even imperfectly) from the "conventional" approach, it probably will be reasonable. If the violations of the assumptions become too egregious... then you can reference back to wanting to produce the best science.
Hope my reflections are useful to you.
20,420 | Advice on collaborations with applied scientists | Hard skills are your foot in the door, and soft skills are the key to actually implementing a solution. Being the smartest person in the room doesn't earn you points.
That being said, you don't have to learn on your own. As cliche as it is, Dale Carnegie's How to Win Friends and Influence People actually can make you a better person. In the same vein, behavioral economics-type podcasts are good at surfacing research, making you think critically, and keeping it lively. See Freakonomics, for example.
Reading and listening are great, but you actually have to change how you act in order to effect good results.
Specific to your case, I've had success by trying all methods and comparing to an agreed-upon metric of "goodness". There's no need to argue if you can objectively test which model is best. This can be in minimizing error, having the best explanatory value, yielding the best "story", etc.
20,421 | Equivalence of (0 + factor|group) and (1|group) + (1|group:factor) random effect specifications in case of compound symmetry | In this example, there are three observations for each combination of the three machines (A, B, C) and the six workers. I'll use $I_n$ to denote an $n$-dimensional identity matrix and $1_n$ to denote an $n$-dimensional vector of ones. Let's say $y$ is the vector of observations, which I will assume is ordered by worker then machine then replicate. Let $\mu$ be the corresponding expected values (e.g. the fixed effects), and let $\gamma$ be a vector of group-specific deviations from the expected values (e.g. the random effects). Conditional on $\gamma$, the model for $y$ can be written:
$$y \sim \mathcal{N}(\mu + \gamma, \sigma^2_y I_{54})$$
where $\sigma^2_y$ is the "residual" variance.
To understand how the covariance structure of the random effects induces a covariance structure among observations, it is more intuitive to work with the equivalent "marginal" representation, which integrates over the random effects $\gamma$. The marginal form of this model is,
$$y \sim \mathcal{N}(\mu, \sigma^2_y I_{54} + \Sigma)$$
Here, $\Sigma$ is a covariance matrix that depends on the structure of $\gamma$ (e.g. the "variance components" underlying the random effects). I'll refer to $\Sigma$ as the "marginal" covariance.
In your m1, the random effects decompose as:
$$\gamma = Z \theta$$
Where $Z = I_{18} \otimes 1_3$ is a design matrix that maps the random coefficients onto observations, and $\theta^T = [\theta_{1,A}, \theta_{1,B}, \theta_{1,C} \dots \theta_{6,A}, \theta_{6,B}, \theta_{6,C}]$ is the 18-dimensional vector of random coefficients ordered by worker then machine, and is distributed as:
$$\theta \sim \mathcal{N}(0, I_6 \otimes \Lambda)$$
Here $\Lambda$ is the covariance of the random coefficients. The assumption of compound symmetry means that $\Lambda$ has two parameters, that I'll call $\sigma_\theta$ and $\tau$, and the structure:
$$\Lambda = \left[\begin{matrix} \sigma^2_\theta + \tau^2 & \tau^2 & \tau^2 \\
\tau^2 & \sigma^2_\theta + \tau^2 & \tau^2 \\
\tau^2 & \tau^2 & \sigma^2_\theta + \tau^2 \end{matrix}\right]$$
(In other words, the correlation matrix underlying $\Lambda$ has all of its off-diagonal elements set to the same value.)
The marginal covariance structure induced by these random effects is $\Sigma = Z (I_6 \otimes \Lambda) Z^T$, so that the variance of a given observation is $\sigma^2_\theta + \tau^2 + \sigma^2_y$ and the covariance between two (separate) observations from workers $i, j$ and machines $u, v$ is:
$$\mathrm{cov}(y_{i,u}, y_{j,v}) = \begin{cases}
0 & \text{if } i\neq j \\
\tau^2 & \text{if } i=j, u\neq v \\
\sigma^2_\theta + \tau^2 & \text{if } i=j, u=v \end{cases}$$
For your m2, the random effects decompose into:
$$\gamma = Z \omega + X \eta$$
Where $Z$ is as before, $X = I_6 \otimes 1_9$ is a design matrix that maps the random intercepts per worker onto observations, $\omega^T = [\omega_{1,A}, \omega_{1,B}, \omega_{1,C}, \dots, \omega_{6,A}, \omega_{6,B}, \omega_{6,C}]$ is the 18-dimensional vector of random intercepts for every combination of machine and worker; and $\eta^T = [\eta_{1}, \dots, \eta_{6}]$ is the 6-dimensional vector of random intercepts for worker. These are distributed as,
$$\eta \sim \mathcal{N}(0, \sigma^2_\eta I_6)$$
$$\omega \sim \mathcal{N}(0, \sigma^2_\omega I_{18})$$
Where $\sigma_\eta^2, \sigma_\omega^2$ are the variances of these random intercepts.
The marginal covariance structure of m2 is $\Sigma = \sigma^2_\omega Z Z^T + \sigma^2_\eta X X^T$, so that the variance of a given observation is $\sigma^2_\omega + \sigma^2_\eta + \sigma^2_y$, and the covariance between two observations from workers $i, j$ and machines $u, v$ is:
$$\mathrm{cov}(y_{i,u}, y_{j,v}) = \begin{cases}
0 & \text{if } i\neq j \\
\sigma_\eta^2 & \text{if } i=j,u\neq v \\
\sigma^2_\omega + \sigma^2_\eta & \text{if } i=j,u=v \end{cases}$$
So ... $\sigma^2_\theta \equiv \sigma^2_\omega$ and $\tau^2 \equiv \sigma^2_\eta$, provided m1 assumes compound symmetry (which it doesn't with your call to lmer, because the random effects covariance is unstructured).
Brevity is not my strong point: this is all just a long, convoluted way of saying that each model has two variance parameters for the random effects, and are just two different ways of writing of the same "marginal" model.
In code ...
sigma_theta <- 1.8
tau <- 0.5
sigma_eta <- tau
sigma_omega <- sigma_theta
Z <- kronecker(diag(18), rep(1,3))
rownames(Z) <- paste(paste0("worker", rep(1:6, each=9)),
rep(paste0("machine", rep(1:3, each=3)),6))
X <- kronecker(diag(6), rep(1,9))
rownames(X) <- rownames(Z)
Lambda <- diag(3)*sigma_theta^2 + tau^2
# marginal covariance for m1:
Z%*%kronecker(diag(6), Lambda)%*%t(Z)
# for m2:
X%*%t(X)*sigma_eta^2 + Z%*%t(Z)*sigma_omega^2 | Equivalence of (0 + factor|group) and (1|group) + (1|group:factor) random effect specifications in c | In this example, there are three observations for each combination of the three machines (A, B, C) and the six workers. I'll use $I_n$ to denote a $n$-dimensional identity matrix and $1_n$ to denote a | Equivalence of (0 + factor|group) and (1|group) + (1|group:factor) random effect specifications in case of compound symmetry
In this example, there are three observations for each combination of the three machines (A, B, C) and the six workers. I'll use $I_n$ to denote a $n$-dimensional identity matrix and $1_n$ to denote an $n$-dimensional vector of ones. Let's say $y$ is the vector of observations, that I will assume is ordered by worker then machine then replicate. Let $\mu$ be the corresponding expected values (e.g. the fixed effects), and let $\gamma$ be a vector of group-specific deviations from the expected values (e.g. the random effects). Conditional on $\gamma$, the model for $y$ can be written:
$$y \sim \mathcal{N}(\mu + \gamma, \sigma^2_y I_{54})$$
where $\sigma^2_y$ is the "residual" variance.
To understand how the covariance structure of the random effects induces a covariance structure among observations, it is more intuitive to work with the equivalent "marginal" representation, which integrates over the random effects $\gamma$. The marginal form of this model is,
$$y \sim \mathcal{N}(\mu, \sigma^2_y I_{54} + \Sigma)$$
Here, $\Sigma$ is a covariance matrix that depends on the structure of $\gamma$ (e.g. the "variance components" underlying the random effects). I'll refer to $\Sigma$ as the "marginal" covariance.
In your m1, the random effects decompose as:
$$\gamma = Z \theta$$
Where $Z = I_{18} \otimes 1_3$ is a design matrix that maps the random coefficients onto observations, and $\theta^T = [\theta_{1,A}, \theta_{1,B}, \theta_{1,C} \dots \theta_{6,A}, \theta_{6,B}, \theta_{6,C}]$ is the 18-dimensional vector of random coefficients ordered by worker then machine, and is distributed as:
$$\theta \sim \mathcal{N}(0, I_6 \otimes \Lambda)$$
Here $\Lambda$ is the covariance of the random coefficients. The assumption of compound symmetry means that $\Lambda$ has two parameters, that I'll call $\sigma_\theta$ and $\tau$, and the structure:
$$\Lambda = \left[\begin{matrix} \sigma^2_\theta + \tau^2 & \tau^2 & \tau^2 \\
\tau^2 & \sigma^2_\theta + \tau^2 & \tau^2 \\
\tau^2 & \tau^2 & \sigma^2_\theta + \tau^2 \end{matrix}\right]$$
(In other words, the correlation matrix underlying $\Lambda$ has all the elements on the offdiagonal set to the same value.)
The marginal covariance structure induced by these random effects is $\Sigma = Z (I_6 \otimes \Lambda) Z^T$, so that the variance of a given observation is $\sigma^2_\theta + \tau^2 + \sigma^2_y$ and the covariance between two (separate) observations from workers $i, j$ and machines $u, v$ is:
$$\mathrm{cov}(y_{i,u}, y_{j,v}) = \begin{cases}
0 & \text{if } i\neq j \\
\tau^2 & \text{if } i=j, u\neq v \\
\sigma^2_\theta + \tau^2 & \text{if } i=j, u=v \end{cases}$$
For your m2, the random effects decompose into:
$$\gamma = Z \omega + X \eta$$
Where Z is as before, $X = I_6 \otimes 1_9$ is a design matrix that maps the random intercepts per worker onto observations, $\omega^T = [\omega_{1,A}, \omega_{1,B}, \omega_{1,C}, \dots, \omega_{6,A}, \omega_{6,B}, \omega_{6,C}]$ is the 18-dimensional vector of random intercepts for every combination of machine and worker; and $\eta^T = [\eta_{1}, \dots, \eta_{6}]$ is the 6-dimensional vector of random intercepts for worker. These are distributed as,
$$\eta \sim \mathcal{N}(0, \sigma^2_\eta I_6)$$
$$\omega \sim \mathcal{N}(0, \sigma^2_\omega I_{18})$$
Where $\sigma_\eta^2, \sigma_\omega^2$ are the variances of these random intercepts.
The marginal covariance structure of m2 is $\Sigma = \sigma^2_\omega Z Z^T + \sigma^2_\eta X X^T$, so that the variance of a given observation is $\sigma^2_\omega + \sigma^2_\eta + \sigma^2_y$, and the covariance between two observations from workers $i, j$ and machines $u, v$ is:
$$\mathrm{cov}(y_{i,u}, y_{j,v}) = \begin{cases}
0 & \text{if } i\neq j \\
\sigma_\eta^2 & \text{if } i=j,u\neq v \\
\sigma^2_\omega + \sigma^2_\eta & \text{if } i=j,u=v \end{cases}$$
So ... $\sigma^2_\theta \equiv \sigma^2_\omega$ and $\tau^2 \equiv \sigma^2_\eta$. If m1 assumed compound symmetry (which it doesn't with your call to lmer, because the random effects covariance is unstructured).
Brevity is not my strong point: this is all just a long, convoluted way of saying that each model has two variance parameters for the random effects, and are just two different ways of writing of the same "marginal" model.
In code ...
sigma_theta <- 1.8
tau         <- 0.5
sigma_eta   <- tau          # m2's worker intercept SD
sigma_omega <- sigma_theta  # m2's worker:machine intercept SD
Z <- kronecker(diag(18), rep(1, 3))  # maps the 18 worker:machine intercepts to the 54 observations
rownames(Z) <- paste(paste0("worker", rep(1:6, each = 9)),
                     rep(paste0("machine", rep(1:3, each = 3)), 6))
X <- kronecker(diag(6), rep(1, 9))   # maps the 6 worker intercepts to the 54 observations
rownames(X) <- rownames(Z)
Lambda <- diag(3) * sigma_theta^2 + tau^2  # compound-symmetric covariance of m1's random effects
# marginal covariance for m1:
Z %*% kronecker(diag(6), Lambda) %*% t(Z)
# for m2 (identical):
X %*% t(X) * sigma_eta^2 + Z %*% t(Z) * sigma_omega^2
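The same computation in NumPy (a sketch re-implementing the R snippet above with my own variable names; it also confirms numerically that the two marginal covariances coincide):

```python
import numpy as np

sigma_theta, tau = 1.8, 0.5
sigma_eta, sigma_omega = tau, sigma_theta

# Z maps the 18 worker:machine intercepts onto the 54 observations,
# X maps the 6 worker intercepts onto the same observations.
Z = np.kron(np.eye(18), np.ones((3, 1)))
X = np.kron(np.eye(6), np.ones((9, 1)))

# Compound-symmetric per-worker covariance of m1's random effects
Lambda = np.eye(3) * sigma_theta**2 + tau**2

Sigma_m1 = Z @ np.kron(np.eye(6), Lambda) @ Z.T
Sigma_m2 = sigma_eta**2 * X @ X.T + sigma_omega**2 * Z @ Z.T

print(np.allclose(Sigma_m1, Sigma_m2))  # True: the two models agree
```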
20,422 | Is splitting the data into test and training sets purely a "stats" thing? | Not all statistical procedures split into training/testing data, also called "cross-validation" (although the entire procedure involves a little more than that).
Rather, this is a technique that specifically is used to estimate out-of-sample error; i.e. how well will your model predict new outcomes using a new dataset? This becomes a very important issue when you have, for example, a very large number of predictors relative to the number of samples in your dataset. In such cases, it is really easy to build a model with great in-sample error but terrible out-of-sample error (called "overfitting"). In the cases where you have both a large number of predictors and a large number of samples, cross-validation is a necessary tool to help assess how well the model will behave when predicting on new data. It's also an important tool when choosing between competing predictive models.
On another note, cross-validation is almost always just used when trying to build a predictive model. In general, it is not very helpful for models when you are trying to estimate the effect of some treatment. For example, if you are comparing the distribution of tensile strength between materials A and B ("treatment" being material type), cross validation will not be necessary; while we do hope that our estimate of treatment effect generalizes out of sample, for most problems classic statistical theory can answer this (i.e. "standard errors" of estimates) more precisely than cross-validation. Unfortunately, classical statistical methodology¹ for standard errors doesn't hold up in the case of overfitting. Cross-validation often does much better in that case.
On the other hand, if you are trying to predict when a material will break based on 10,000 measured variables that you throw into some machine learning model based on 100,000 observations, you'll have a lot of trouble building a great model without cross validation!
I'm guessing in a lot of the physics experiments done, you are generally interested in estimation of effects. In those cases, there is very little need for cross-validation.
¹One could argue that Bayesian methods with informative priors are a classical statistical methodology that addresses overfitting. But that's another discussion.
Side note: while cross-validation first appeared in the statistics literature, and is definitely used by people who call themselves statisticians, it's become a fundamental required tool in the machine learning community. Lots of stats models will work well without the use of cross-validation, but almost all models that are considered "machine learning predictive models" need cross-validation, as they often require selection of tuning parameters, which is almost impossible to do without cross-validation.
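A minimal illustration of in-sample vs. out-of-sample error (a NumPy sketch; the polynomial degree, sample sizes, and noise level are my own choices, not from the answer):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_data(n):
    """Noisy samples of a smooth underlying function."""
    x = rng.uniform(0, 3, n)
    return x, np.sin(x) + rng.normal(0, 0.3, n)

x_train, y_train = make_data(20)
x_test, y_test = make_data(200)

# Degree 12 means 13 parameters for only 20 training points: easy to overfit
coef = np.polyfit(x_train, y_train, deg=12)
mse = lambda x, y: np.mean((np.polyval(coef, x) - y) ** 2)

print("train MSE:", mse(x_train, y_train))  # small: great in-sample fit
print("test MSE:", mse(x_test, y_test))     # larger: the model overfits
```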
20,423 | Is splitting the data into test and training sets purely a "stats" thing? | Being an (analytical) chemist, I encounter both approaches: analytical calculation of figures of merit [mostly for univariate regression] as well as direct measurement of predictive figures of merit.
The train/test splitting to me is the "little brother" of a validation experiment to measure prediction quality.
Long answer:
The typical experiments we do e.g. in undergraduate physical chemistry use univariate regression. The property of interest are often the model parameters, e.g. the time constant when measuring reaction kinetics, but sometimes also predictions (e.g. univariate linear calibration to predict/measure some value of interest).
These situations are very benign in terms of not overfitting: there's usually a comfortable number of degrees of freedom left after all parameters are estimated, and they are used to train (as in education) students with classical confidence or prediction interval calculation, and classical error propagation - they were developed for these situations. And even if the situation is not entirely textbook-like (e.g. I have structure in my data, e.g. in the kinetics I'd expect the data is better described by variance between runs of the reaction + variance between measurements in a run than by a plain one-variance-only approach), I can typically have enough runs of the experiment to still get useful results.
However, in my professional life, I deal with spectroscopic data sets (typically 100s to 1000s of variates $p$) and moreover with rather limited sets of independent cases (samples) $n$. Often $n < p$, so we use regularization, for which it is not always easy to say how many degrees of freedom we use, and in addition we try to at least somewhat compensate for the small $n$ by using (large) numbers of almost repeated measurements - which leaves us with an unknown effective $n$. Without knowing $n$ or $df$, the classical approaches don't work. But as I'm mostly doing predictions, I always have a very direct possibility of measuring the predictive ability of my model: I do predictions, and compare them to reference values.
This approach is actually very powerful (though costly due to increased experimental effort), as it allows me to probe predictive quality also for conditions that were not covered in the training/calibration data. E.g. I can measure how predictive quality deteriorates with extrapolation (extrapolation includes also e.g. measurements made, say, a month after the training data was acquired), I can probe the ruggedness against confounding factors that I expect to be important, etc. In other words, we can study the behaviour of our model just as we study the behavior of any other system: we probe certain points, or perturb it and look at the change in the system's answer, etc.
I'd say that the more important predictive quality is (and the higher the risk of overfitting) the more we tend to prefer direct measurements of predictive quality rather than analytically derived numbers. (Of course we could have included all those confounders also into the design of the training experiment). Some areas such as medical diagnostics demand that proper validation studies are performed before the model is "let loose" on real patients.
The train/test splitting (whether hold out* or cross validation or out-of-bootstrap or ...) takes this one step easier. We save the extra experiment and do not extrapolate (we only generalize to predicting unknown independent cases of the very same distribution of the training data). I'd describe this as a verification rather than validation (although validation is deeply in the terminology here).
This is often the pragmatic way to go if there are not too high demands on the precision of the figures of merit (they may not need to be known very precisely in a proof-of-concept scenario).
* do not confuse a single random split into train and test with a properly designed study to measure prediction quality.
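The footnote's warning can be made concrete (a NumPy sketch, entirely my own construction: it compares the dataset-to-dataset spread of a single-split error estimate against a 5-fold cross-validation estimate on simulated linear-regression data):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n=100):
    """One synthetic dataset: linear signal plus unit-variance noise."""
    X = rng.standard_normal((n, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_normal(n)
    return X, y

def fit_mse(Xtr, ytr, Xte, yte):
    beta, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)
    return np.mean((Xte @ beta - yte) ** 2)

def single_split(X, y):          # one 50/50 split
    h = len(y) // 2
    return fit_mse(X[:h], y[:h], X[h:], y[h:])

def kfold_cv(X, y, k=5):         # average over k held-out folds
    folds = np.array_split(np.arange(len(y)), k)
    return np.mean([fit_mse(np.delete(X, f, axis=0), np.delete(y, f),
                            X[f], y[f]) for f in folds])

splits, cvs = [], []
for _ in range(300):
    X, y = simulate()
    splits.append(single_split(X, y))
    cvs.append(kfold_cv(X, y))

# The CV estimate of prediction error fluctuates less from dataset to dataset
print(np.std(splits), np.std(cvs))
```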
20,424 | Optimization with orthogonal constraints | I found the following paper to be useful:
Edelman, A., Arias, T. A., & Smith, S. T. (1998).
The geometry of algorithms with orthogonality constraints.
SIAM journal on Matrix Analysis and Applications, 20(2), 303-353.
This paper has far more information than you may need, in terms of differential geometry and higher-order optimization methods.
However the information to answer your question is actually quite straightforward, and is in fact contained mostly in equations 2.53 and 2.70, which are both of the form
$$\nabla{f}=G-X\Phi$$
where the nominal gradient
$$G=\frac{\partial{f}}{\partial{X}}$$
is corrected to constrained gradient $\nabla{f}$ by subtracting off its projection $\Phi$ onto the current solution $X$. This is the normal to the manifold, similar to circular motion, and ensures the corrected gradient is tangent to the manifold.
Note: These formulas assume you are already on the manifold, i.e. $X^TX=I$. So in practice you will need to make sure your initial condition is appropriate (e.g. $X_0=I$, possibly rectangular). You may also need to occasionally correct accumulated roundoff and/or truncation error (e.g. via SVD, see "ZCA" example below).
In the unconstrained case, $\Phi=0$, while in the constrained case $\Phi$ takes two forms:
$$\Phi_{\mathrm{\small{G}}}=X^TG\implies\nabla_{\mathrm{\small{G}}}{f}=\left(I-XX^T\right)G$$
which corresponds to the "Grassmann manifold". The distinction here is that $\nabla_{\mathrm{\small{G}}}$ is insensitive to rotations, since for a rotation $Q^TQ=I$ and $X=X_0Q$, we have $XX^T=X_0X_0^T$.
The second form is
$$\Phi_{\mathrm{\small{S}}}=G^TX\implies\nabla_{\mathrm{\small{S}}}{f}=G-XG^TX$$
which corresponds to the "Stiefel manifold", and is sensitive to rotations.
A simple example is approximation of a given matrix $A\in\mathbb{R}^{n\times{p}}$ with $p\leq{n}$ by an orthogonal matrix $X$, minimizing the least-squares error. In this case we have
$$f[X]=\tfrac{1}{2}\|X-A\|_F^2=\tfrac{1}{2}\sum_{ij}\left(X_{ij}-A_{ij}\right)^2\implies{G}=X-A$$
The unconstrained case $\nabla{f}=G$ has solution $X=A$, because we are not concerned with ensuring $X$ is orthogonal.
For the Grassmann case we have
$$\nabla_{\mathrm{\small{G}}}{f}=\left(XX^T-I\right)A=0$$
This can only have a solution if $A$ is square rather than "skinny", because if $p<n$ then $X$ will have a null space.
For the Stiefel case, we have
$$\nabla_{\mathrm{\small{S}}}{f}=XA^TX-A=0$$
which can be solved even when $p<n$.
These two cases, Grassmann vs. Stiefel, essentially correspond to the difference between "PCA vs. ZCA whitening". In terms of the SVD, if the input matrix is $A=USV^T$, then the solutions are $X_{\mathrm{\small{G}}}=U$ and $X_{\mathrm{\small{S}}}=UV^T$. The PCA solution $X_{\mathrm{\small{G}}}$ only applies to a square input, i.e. $A$ must be the "covariance matrix". However the ZCA solution $X_{\mathrm{\small{S}}}$ can be used when $A$ is a "data matrix". (This is more properly known as the Orthogonal Procrustes problem.)
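As a concrete check (a NumPy sketch; the matrix sizes and tolerances are my choices), one can verify that $X_{\mathrm{S}}=UV^T$ has orthonormal columns, makes the Stiefel gradient $G-XG^TX$ vanish, and beats other orthonormal candidates on the least-squares objective:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 3
A = rng.standard_normal((n, p))

# Stiefel (orthogonal Procrustes) solution via the SVD of A
U, s, Vt = np.linalg.svd(A, full_matrices=False)
X = U @ Vt

# X has orthonormal columns, and the Stiefel gradient G - X G^T X vanishes
G = X - A
grad_S = G - X @ G.T @ X
print(np.allclose(X.T @ X, np.eye(p)))  # True
print(np.linalg.norm(grad_S))           # ~ 0 (zero up to roundoff)

# X also attains a lower least-squares error than random orthonormal candidates
f = lambda M: 0.5 * np.linalg.norm(M - A) ** 2
for _ in range(100):
    Q, _ = np.linalg.qr(rng.standard_normal((n, p)))
    assert f(X) <= f(Q) + 1e-12
```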
20,425 | Variance of maximum of Gaussian random variables | You can obtain an upper bound by applying the Talagrand inequality: look at Chatterjee's book (on the superconcentration phenomenon, for instance).
It tells you that ${\rm Var}(f)\leq C\sum_{i=1}^n\frac{\|\partial_i f\|_2^2}{1+\log(\|\partial_i f\|_2/\|\partial_i f\|_1)}$.
For the maximum, you get $\partial_i f=\mathbf{1}_{X_i=\max}$; then by integrating with respect to the Gaussian measure on $\mathbb{R}^n$ you get $\|\partial_i f\|_2^2=\|\partial_i f\|_1=\frac{1}{n}$ by symmetry. (Here I choose all my random variables iid with variance one.)
This is the true order of the variance: since you have some upper bound on the expectation of the maximum, this article of Ding, Eldan and Zhai (On multiple peaks and moderate deviations of the Gaussian supremum) tells you that
${\rm Var}(\max X_i)\geq C/(1+\mathbb{E}[\max X_i])^2$
It is also possible to obtain sharp concentration inequalities reflecting these bounds on the variance: you can look at http://www.wisdom.weizmann.ac.il/mathusers/gideon/papers/ranDv.pdf
or, for more general Gaussian processes, at my paper
https://perso.math.univ-toulouse.fr/ktanguy/files/2012/04/Article-3-brouillon.pdf
In full generality it is rather hard to find the right order of magnitude of the variance of a Gaussian supremum, since the tools from concentration theory are always suboptimal for the maximum function.
Why do you need these kinds of estimates if I may ask ?
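A quick Monte Carlo simulation shows the shrinking variance these bounds describe (a sketch; the sample sizes and replication count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
reps = 5000

def var_of_max(n):
    """Empirical variance of the max of n iid standard Gaussians."""
    return rng.standard_normal((reps, n)).max(axis=1).var()

v_small, v_large = var_of_max(10), var_of_max(1000)
print(v_small, v_large)  # the variance shrinks as n grows, roughly like 1/log(n)
```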
20,426 | Variance of maximum of Gaussian random variables | More generally, the expectation and variance of the range depends on how fat the tail of your distribution is. For the variance, it is $O(n^{-B})$ where $B$ depends on your distribution ($B = 2$ for uniform, $B = 1$ for Gaussian, and $B = 0$ for exponential.) See here. The table below shows the order of magnitude for the range.
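The tail-dependence of the range's variance is easy to see in a quick Monte Carlo (a sketch; the distributions, $n$, and replication counts are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
reps, n = 4000, 1000

def var_of_range(draw):
    """Empirical variance of max - min over `reps` samples of size n."""
    x = draw((reps, n))
    return (x.max(axis=1) - x.min(axis=1)).var()

v_unif = var_of_range(lambda s: rng.uniform(size=s))      # light tail
v_gauss = var_of_range(rng.standard_normal)               # Gaussian tail
v_exp = var_of_range(lambda s: rng.exponential(size=s))   # fatter tail

# Increasing: the fatter the tail, the larger the variance of the range
print(v_unif, v_gauss, v_exp)
```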
20,427 | Sum of rating scores vs estimated factor scores? | I've been wrestling with this idea myself in some current projects. I think you need to ask yourself what is being estimated here. If a one-factor model fits, then the factor scores estimate the latent factor. The straight sum or mean of your manifest variables estimates something else, unless every observation loads equally on the factor, and the uniquenesses are also the same. And that something else is probably not a quantity of great theoretical interest.
So if a one-factor model fits, you are probably well advised to use the factor scores. I take your point about comparability across studies, but within a particular study, I think the factor scores have a lot going for them.
Where it gets interesting is when a one-factor model does not fit, either because a two-factor model applies (or higher), or because the covariance structure is more complicated than a factor model predicts. To me, the question is then whether the straight total of the variables refers to anything real. This is particularly true if the data have more than one dimension. In practice, what often happens is that you have a bunch of related variables (items on a survey, perhaps), with one or two of them being way different from the others. You can say, "to Hell with this", and take the average of everything, regardless of what it means. Or you can go with the factor scores. If you fit a one-factor model, what will typically happen, is that the factor analysis will downweight the less useful variables (or at least, those variables that really belong on a second factor score). In effect, it spots them as belonging to a different dimension and ignores them.
So I believe that the factor score can sort of prune the data to give something more uni-dimensional than you started with. But I don't have a reference for this, and I'm still trying to figure out in my own work if I like this approach. To me, the big danger is overfitting when you plough the scores into another model with the same data. The scores are already the answer to an optimization question, so where does that leave the rest of the analysis? I hate to think.
But at the end of the day, does a sum or total of variables actually make sense if something like a one-factor model does not apply?
A lot of these questions would not arise if people designed better scales to start with.
20,428 | Sum of rating scores vs estimated factor scores? | Summing or averaging items loaded by the common factor is a traditional way to reckon the construct score (the construct representing the factor). It is the simplest version of the "coarse method" of computing factor scores; the method's main point stands in using factor loadings as score weights. While refined methods to compute scores use specially estimated score coefficients (calculated from the loadings) as the weights.
This answer does not universally "suggest about when to use [refined] factor scores over plain sum of item scores", which is a vast domain, but focuses on showing some concrete obvious implications going with preferring one way of reckoning the construct over the other way.
Consider a simple situation with some factor $F$ and two items loaded by it. According to Footnote 1 here explaining how regressional factor scores are computed, factor score coefficients $b_1$ and $b_2$ to compute factor scores of $F$ come from
$s_1=b_1r_{11}+b_2r_{12}$,
$s_2=b_1r_{12}+b_2r_{22}$,
where $s_1$ and $s_2$ are the correlations between the factor and the items - the factor loadings; $r_{12}$ is the correlation between the items. The $b$ coefficients are what distinguish factor scores from a simple, unweighted sum of the item scores: when you compute just the sum (or mean), you deliberately set both $b$s equal, whereas in "refined" factor scores the $b$s are obtained from the above equations and are usually not equal.
For simplicity, and because factor analysis is often performed on correlations, let us take the $r$s as correlations, not covariances. Then $r_{11}$ and $r_{22}$ are unity and can be omitted. Then,
$b_1 = \frac{s_2r_{12}-s_1}{r_{12}^2-1}$,
$b_2 = \frac{s_1r_{12}-s_2}{r_{12}^2-1}$,
hence $b_1-b_2= -\frac{(r_{12}+1)(s_1-s_2)}{r_{12}^2-1}.$
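As a quick numerical sanity check (my addition, not part of the original answer; the loading and correlation values below are made up), the closed forms can be verified against a direct solve of the two equations above:

```python
import numpy as np

# Made-up loadings and item correlation, for illustration only.
s = np.array([0.70, 0.45])            # loadings s1, s2
r12 = 0.30                            # correlation between the items

# Solve the two linear equations s = R b for the score coefficients b.
R = np.array([[1.0, r12], [r12, 1.0]])
b = np.linalg.solve(R, s)

# Closed forms derived above.
b1 = (s[1] * r12 - s[0]) / (r12**2 - 1)
b2 = (s[0] * r12 - s[1]) / (r12**2 - 1)
print(b, (b1, b2))
```

The solved vector and the closed forms agree, and their difference matches the $b_1-b_2$ expression above.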
We are interested in how this potential inequality between the $b$s is dependent on the inequality among the loadings $s$s and the correlation $r_{12}$. The function $b_1-b_2$ is shown below on the surface plot and also on a heatmap plot.
Clearly, as the loadings are equal ($s_1-s_2=0$) the $b$ coefficients are also equal, always. As $s_1-s_2$ grows, $b_1-b_2$ grows in response, and grows the more rapidly the greater is $r_{12}$.
So, if two items are loaded by their factor about equally you may safely set their weights equal, i.e. compute simple sum, - because the $b$ weights (which determine regressional factor scores) are about equal too. You do not depart far from factor scores (a).
But consider two different loadings, say, $s_1=.70$ and $s_2=.45$, a difference of $.25$. If you choose to simply sum the scores given by a respondent, how far awry your decision is relative to the estimated factor score depends on how strongly the items correlate with each other. If they do not correlate very strongly, your bias is not too pronounced (b). But if they correlate really strongly, the bias is strong too, so a simple sum won't do (c). Interpreting the reason in the three situations:
c. If they correlate strongly, the weaker loaded item is a junior
duplicate of the other one. What's the reason to count that weaker
indicator/symptom in the presence of its stronger substitute? Not much reason. And
factor scores adjust for that (while simple summation doesn't). Note
that in a multifactor questionnaire the "weaker loaded item" is
often another factor's item, loaded higher there; while in the
present factor this item gets restrained, as we see now, in
computation of factor scores, - and that serves it right.
b. But if items, while loaded as before unequally, do not correlate that
strongly, then they are different indicators/symptoms to us. And
could be counted "twice", i.e. just summed. In this case, factor scores try to respect the weaker item to the extent its loading still allows, for it being a different embodiment of the factor.
a. Two items can also be counted twice, i.e. just summed, whenever they have similar, sufficiently high, loadings by the factor,
whatever correlation between these items. (Factor scores add more weight to both items when they correlate not too tight, however the weights are equal.) It seems not unreasonable
that we usually tolerate or admit quite duplicate items if they are all
strongly loaded. If you don't like this (sometimes you may want to)
you are ever free to eliminate duplicates from the factor manually.
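The three situations can be illustrated numerically (my own sketch using the closed forms derived above; the loadings are the ones quoted, the correlation values are made up):

```python
def score_coefs(s1, s2, r12):
    # Closed-form regression-method score coefficients for two items
    # (hypothetical helper implementing the formulas derived above).
    b1 = (s2 * r12 - s1) / (r12**2 - 1)
    b2 = (s1 * r12 - s2) / (r12**2 - 1)
    return b1, b2

# (a) equal loadings: equal weights, whatever the item correlation
print(score_coefs(0.60, 0.60, 0.2), score_coefs(0.60, 0.60, 0.7))
# (b) unequal loadings, weak correlation: weights differ moderately
print(score_coefs(0.70, 0.45, 0.2))
# (c) unequal loadings, strong correlation: the weaker item is nearly pushed out
print(score_coefs(0.70, 0.45, 0.6))
```

In case (c) the second coefficient comes out close to zero: the weak duplicate is "pushed out", as described above.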
So, in the computation of (refined) factor scores (by the regression method at least) there apparently are "get along / push out" intrigues among the variables constituting the construct, in their influence on the scores. Equally strong indicators tolerate each other, as do unequally strong ones that are not strongly correlated. A weaker indicator strongly correlated with stronger indicators gets "shut up". Simple addition/averaging doesn't have that "push out a weak duplicate" intrigue.
Please see also this answer which warns that a factor is theoretically rather an "essence inside" than a gross collection or heap of "its" indicative phenomena. Therefore blindly summing up items - taking neither their loadings nor their correlations into account - is potentially problematic. On the other hand, a factor, as scored, can only be some kind of sum of its items, and so everything is about a better conception of the weights in that sum.
Let us also glance at deficiency of coarse or summation method more generally and abstractly.
In the beginning of the answer I've said that obtaining a construct score via plain summing/averaging is a particular case of coarse method of factor score reckoning whereby score coefficients $b$s are replaced by factor loadings $a$s (when the loadings enter dichotomized as 1 (loaded) and 0 (unloaded) we get exactly that simple summing or averaging of items).
Let $\hat F_i$ be respondent $i$'s factor score (the estimated value) and $F_i$ be his true factor value (ever unknown). We also know that each of the items $X1$ and $X2$ loaded by the common factor (with loadings $a1$ and $a2$) consists of that common factor $F$ plus a unique factor $U$ (we assume the latter comprises the specific factor S and the error term e). So, in reckoning factor scores as packages do via $b$s we have
$\hat F_i = b1X1_i+b2X2_i = b1(F_i+U1_i)+b2(F_i+U2_i) = (b1+b2)F_i+b1U1_i+b2U2_i$.
If $b1U1_i+b2U2_i$ happens to be close to zero, $\hat F_i$ and $F_i$ are equivalent. Unless the unique factors $U$s are altogether absent (or unless we know their values, which we don't) we can never provide $\hat F$ scores reflecting $F$ values precisely. We could, however, contrive the two $b$ coefficients in such a way that $\text{var}[b1U1_i+b2U2_i]$ is as small as possible across respondents; then $\hat F$ will strongly correlate with $F$. One method or another, by estimating score coefficients $b$s from loadings $a$s and values $X$ we can make $\hat F$ scores quite representative of $F$.
But look at the "coarse method" - where loadings $a$s themselves are admitted in place of $b$s to the above approximation of $F$ by $\hat F$:
$\hat F_i = a1X1_i+a2X2_i= ~...~ =(a1+a2)F_i+a1U1_i+a2U2_i$.
What we see here is the unique factors being weighted by the same coefficients that express the degree to which the variables are weighted by the common factor. Above, the $b$s were computed with the help of the $a$s, true, but they weren't the $a$s themselves; now the $a$s themselves serve as the weights, applied to something they do not pertain to. This is the crudity we commit when using the "coarse method" of factor score computation, including plain summation/averaging of items as its specific variant.
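The crudity can be seen in a small simulation (my own sketch, not from the original answer; the loadings are made up). With one strongly and one weakly loaded item, regression-method weights recover the true factor noticeably better than the plain sum:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
a1, a2 = 0.9, 0.3        # made-up loadings for a one-factor model

# Generate standardized items X_j = a_j * F + sqrt(1 - a_j^2) * U_j.
F = rng.standard_normal(n)
X1 = a1 * F + np.sqrt(1 - a1**2) * rng.standard_normal(n)
X2 = a2 * F + np.sqrt(1 - a2**2) * rng.standard_normal(n)

# Refined weights: b = R^{-1} s, with model-implied r12 = a1*a2.
R = np.array([[1.0, a1 * a2], [a1 * a2, 1.0]])
b = np.linalg.solve(R, np.array([a1, a2]))

coarse = X1 + X2                 # plain sum (unit weights)
refined = b[0] * X1 + b[1] * X2  # regression-method score

r_coarse = np.corrcoef(coarse, F)[0, 1]
r_refined = np.corrcoef(refined, F)[0, 1]
print(r_coarse, r_refined)
```

With these loadings the refined score correlates with the true $F$ around .90 versus roughly .75 for the plain sum.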
20,429 | Regression to the mean in "Thinking, Fast and Slow" | Is my interpretation of Kahneman's procedure correct?
This is a bit hard to say, because Kahneman's step #2 is not formulated very precisely: "Determine the GPA that matches your impression of the evidence" -- what exactly is that supposed to mean? If somebody's impressions are well calibrated, then there will be no need to correct towards the mean. If somebody's impressions are grossly off, then they should rather correct even stronger.
So I agree with @AndyW that Kahneman's advice is only a rule of thumb.
That said, if you interpret Kahneman's step #2 as you interpreted it in your Interpretation steps ##1--2: i.e. that you take GPA with the same $z$-score as the $z$-score of reading precocity as "matching your impression of the evidence", then your procedure is exactly mathematically correct and not a rule of thumb.
[...] is there a more formal mathematical justification of his procedure, especially step 4? In general, what is the relationship between the correlation between two variables and changes/differences in their standard scores?
If you predict $y$ from $x$ and both are converted into $z$-scores, i.e. have zero mean and unit variance, and have correlation $\rho$ between each other, then it can be easily shown that the regression equation will be $$y=\rho x,$$ i.e. regression coefficient will be equal to the correlation coefficient.
From here it immediately follows that if you know the value of $x$ (e.g. you know the standard score of the reading precocity), then the predicted value of $y$ (standard score of GPA) will be $\rho$ times that.
This is exactly what is called "regression to the mean". You can see some formulas and derivations in the discussion on Wikipedia.
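A quick simulation (my addition, with an arbitrary $\rho=0.3$) confirms that after converting both variables to $z$-scores, the least-squares slope equals the sample correlation:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
rho = 0.3                     # an assumed correlation, for illustration

# Simulate (x, y) with correlation rho, then convert both to z-scores.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)
zx = (x - x.mean()) / x.std()
zy = (y - y.mean()) / y.std()

# Least-squares slope of zy on zx equals the sample correlation.
slope = (zx * zy).sum() / (zx ** 2).sum()
print(slope, np.corrcoef(x, y)[0, 1])
```

The fitted slope matches the sample correlation exactly (up to floating-point error) and is close to the generating $\rho$.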
20,430 | Regression to the mean in "Thinking, Fast and Slow" | The order of your numbers does not match the Kahneman quote. Because of this it seems like you may be missing the overall point.
Kahneman's point one is the most important. It means literally estimate the average GPA -- for everyone. The point behind this advice is that it is your anchor. Any prediction you give should be in reference to changes around this anchor point. I'm not sure I see this step in any of your points!
Kahneman uses an acronym, WYSIATI, what you see is all there is. This is the human tendency to overestimate the importance of the information currently available. For many people, the information about reading ability would make people think Julie is smart, and so people would guesstimate the GPA of a smart person.
But, a child's behavior at four contains very little information related to adult behavior. You are probably better off ignoring it in making predictions. It should only sway you from your anchor by a small amount. Also, people's first guess of a smart person's GPA can be very inaccurate. Due to selection, the majority of seniors in college are above average intelligence.
There actually is some other hidden information in the question besides Julie's reading ability at four years old though.
Julie is likely to be a female name
She is attending a state university
She is a senior
I suspect all three of these characteristics raise the average GPA slightly compared to the overall student population. For example, I bet seniors likely have a higher GPA than sophomores because students with very bad GPAs drop out.
So Kahneman's procedure (as a hypothetical) would go like something like this.
The average GPA for a female senior in a state university is 3.1.
I guess that based on Julie's advanced reading ability at 4 that her GPA is 3.8
I guess reading ability at 4 years old has a correlation of 0.3 with GPA
Then 30% of the way between 3.1 and 3.8 is 3.3 (i.e. 3.1 + (3.8-3.1)*0.3)
So in this hypothetical the final guess for Julie's GPA is 3.3.
The regression to the mean in Kahneman's approach is that step 2 is likely to be a gross over-estimate of the importance of the information available. So a better strategy is to regress our prediction back to the overall mean. Steps 3 and 4 are (ad-hoc) ways to estimate how much to regress.
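The hypothetical procedure above amounts to a one-line shrinkage formula; a minimal sketch (my addition, using the made-up numbers from the steps):

```python
def regress_prediction(anchor, intuitive, corr):
    # Move corr of the way from the base-rate anchor toward the
    # intuitive, evidence-matched guess (hypothetical helper).
    return anchor + corr * (intuitive - anchor)

# The made-up numbers from the hypothetical walk-through above.
print(regress_prediction(3.1, 3.8, 0.3))
```

With correlation 0 the prediction stays at the anchor; with correlation 1 it equals the intuitive guess.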
20,431 | How to train LSTM layer of deep-network | The best place to start with LSTMs is the blog post of A. Karpathy http://karpathy.github.io/2015/05/21/rnn-effectiveness/. If you are using Torch7 (which I would strongly suggest) the source code is available at github https://github.com/karpathy/char-rnn.
I would also try to alter your model a bit. I would use a many-to-one approach so that you input words through a lookup table and add a special word at the end of each sequence, so that only when you input the "end of the sequence" sign you will read the classification output and calculate the error based on your training criterion. This way you would train directly under a supervised context.
On the other hand, a simpler approach would be to use paragraph2vec (https://radimrehurek.com/gensim/models/doc2vec.html) to extract features for your input text and then run a classifier on top of your features. Paragraph vector feature extraction is very simple and in python it would be:
# Old gensim (< 1.0) API: LabeledSentence with 'labels', train() without explicit epochs.
from gensim.models.doc2vec import Doc2Vec, LabeledSentence

class LabeledLineSentence(object):
    def __init__(self, filename):
        self.filename = filename
    def __iter__(self):
        for uid, line in enumerate(open(self.filename)):
            yield LabeledSentence(words=line.split(), labels=['TXT_%s' % uid])

sentences = LabeledLineSentence('your_text.txt')
model = Doc2Vec(alpha=0.025, min_alpha=0.025, size=50, window=5, min_count=5, dm=1, workers=8, sample=1e-5)
model.build_vocab(sentences)

epochs = 10  # a chosen number of passes ('epochs' was undefined in the original)
for epoch in range(epochs):
    try:
        model.train(sentences)
    except (KeyboardInterrupt, SystemExit):
        break
20,432 | Relation between sum of Gaussian RVs and Gaussian Mixture | A weighted sum of Gaussian random variables $X_1,\ldots,X_p$
$$\sum_{i=1}^p \beta_i X_i$$
is a Gaussian random variable: if
$$(X_1,\ldots,X_p)\sim\text{N}_p(\mu,\Sigma)$$then
$$\beta^\text{T}(X_1,\ldots,X_p)\sim\text{N}_1(\beta^\text{T}\mu,\beta^\text{T}\Sigma\beta)$$
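A Monte Carlo check of this fact (my own sketch; the mean vector, covariance and weight vector below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary mean, positive-definite covariance and weight vector (assumptions).
mu = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((3, 3))
Sigma = A @ A.T + np.eye(3)
beta = np.array([0.5, -1.0, 2.0])

# Monte Carlo: the weighted sum beta^T X has mean beta^T mu and
# variance beta^T Sigma beta.
X = rng.multivariate_normal(mu, Sigma, size=500_000)
S = X @ beta
print(S.mean(), beta @ mu)
print(S.var(), beta @ Sigma @ beta)
```

The simulated mean and variance of the weighted sum agree with $\beta^\text{T}\mu$ and $\beta^\text{T}\Sigma\beta$ up to Monte Carlo error.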
A mixture of Gaussian densities has a density given as a weighted sum of Gaussian densities:$$f(\cdot;\theta)=\sum_{i=1}^p \omega_i \varphi(\cdot;\mu_i,\sigma_i)$$which is almost invariably not equal to a Gaussian density. See e.g. the blue estimated mixture density below (where the yellow band is a measure of variability of the estimated mixture):
[Source: Marin and Robert, Bayesian Core, 2007]
A random variable with this density, $X\sim f(\cdot;\theta)$ can be represented as
$$X=\sum_{i=1}^p \mathbb{I}(Z=i) X_i = X_{Z}$$
where $X_i\sim\text{N}_p(\mu_i,\sigma_i)$ and $Z$ is Multinomial with $\mathbb{P}(Z=i)=\omega_i$:$$Z\sim\text{M}(1;\omega_1,\ldots,\omega_p)$$
20,433 | Relation between sum of Gaussian RVs and Gaussian Mixture | And here is some R code to complement @Xi'an's answer:
par(mfrow=c(2,1))
nsamples <- 100000
# Sum of two Gaussians
x1 <- rnorm(nsamples, mean=-10, sd=1)
x2 <- rnorm(nsamples, mean=10, sd=1)
hist(x1+x2, breaks=100)
# Mixture of two Gaussians
z <- runif(nsamples)<0.5 # assume mixture coefficients are (0.5,0.5)
x1_x2 <- rnorm(nsamples,mean=ifelse(z,-10,10),sd=1)
hist(x1_x2, breaks=100)
20,434 | Relation between sum of Gaussian RVs and Gaussian Mixture | The distribution of the sum of independent random variables is the convolution of their distributions. As you have noted, the convolution of two Gaussians happens to be Gaussian.
The distribution of a mixture model is a weighted average of the RVs' distributions. Samples from (finite) mixture models can be produced by flipping a coin (or rolling a die) to decide which distribution to draw from: say I have two RVs $X,Y$ and I want to produce an RV $Z$ whose distribution is the average of the distributions of $X$ and $Y$. If the coin lands heads, let $Z=X$; if it lands tails, let $Z=Y$.
20,435 | How to derive Gibbs sampling?
Computing a joint distribution from conditional distributions in general is very difficult. If the conditional distributions are chosen arbitrarily, a common joint distribution might not even exist. In this case, even showing that the conditional distributions are consistent is generally difficult. One result that might be used for deriving a joint distribution is Brook's lemma,
$$ \frac{p(\mathbf{x})}{p(\mathbf{x}')} = \prod_i \frac{p(x_i \mid \mathbf{x}_{<i}, \mathbf{x}'_{>i})}{p(x_i' \mid \mathbf{x}_{<i}, \mathbf{x}'_{>i})},$$
by choosing a fixed state $\mathbf{x}'$, although I have never successfully used it myself for that purpose. For more on that topic, I would look at Julian Besag's work.
To prove that Gibbs sampling works, however, it's better to take a different route. If a Markov chain implemented by a sampling algorithm has distribution $p$ as invariant distribution, and is irreducible and aperiodic, then the Markov chain will converge to that distribution (Tierney, 1994).
Gibbs sampling will always leave the joint distribution invariant from which the conditional distributions were derived: Roughly, if $(x_0, y_0) \sim p(x_0, y_0)$ and we sample $x_1 \sim p(x_1 \mid y_0)$, then
$$(x_1, y_0) \sim \int p(x_0, y_0) p(x_1 \mid y_0) \, dx_0 = p(x_1 \mid y_0) p(y_0) = p(x_1, y_0).$$
That is, updating $x$ by conditionally sampling does not change the distribution of the sample.
However, Gibbs sampling is not always irreducible. While we can always apply it without breaking things (in the sense that if we already have a sample from the desired distribution it will not change the distribution), it depends on the joint distribution whether Gibbs sampling will actually converge to it (a simple sufficient condition for irreducibility is that the density is positive everywhere, $p(\mathbf{x}) > 0$).
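The two-step Gibbs update described above can be sketched numerically. The following Python snippet (illustrative, not from the original answer) samples a bivariate normal with zero means, unit variances and correlation $\rho$ via its full conditionals $x \mid y \sim \text{N}(\rho y, 1-\rho^2)$ and $y \mid x \sim \text{N}(\rho x, 1-\rho^2)$, a standard textbook case where the conditionals are known in closed form; the function name `gibbs_bivariate_normal` is made up for the sketch.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_steps, seed=1):
    """Two-step Gibbs sampler for a bivariate normal with zero means,
    unit variances and correlation rho. The full conditionals are
    x | y ~ N(rho*y, 1 - rho^2) and y | x ~ N(rho*x, 1 - rho^2)."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x = y = 0.0
    out = []
    for _ in range(n_steps):
        x = rng.gauss(rho * y, sd)  # draw x from p(x | y)
        y = rng.gauss(rho * x, sd)  # draw y from p(y | x)
        out.append((x, y))
    return out

rho = 0.8
samples = gibbs_bivariate_normal(rho, 100_000)
n = len(samples)
mx = sum(x for x, _ in samples) / n
my = sum(y for _, y in samples) / n
vx = sum((x - mx) ** 2 for x, _ in samples) / n
vy = sum((y - my) ** 2 for _, y in samples) / n
corr = sum((x - mx) * (y - my) for x, y in samples) / (n * math.sqrt(vx * vy))
```

Because the density is positive everywhere, the chain is irreducible and the sample moments converge to those of the target distribution.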
20,436 | Questions about Q-Learning using Neural Networks
Q1. You're definitely on the right track, but a few changes could help immensely. Some people use one output unit per action so that they only have to run their network once for action selection (you have to run your net once for each possible action). But this shouldn't make a difference with regards to learning, and is only worth implementing if you're planning on scaling your model up significantly.
Q2. Generally, people use a linear activation function for the last layer of their neural network, especially for reinforcement learning. There are a variety of reasons for this, but the most pertinent is that a linear activation function allows you to represent the full range of real numbers as your output. Thus, even if you don't know the bounds on the rewards for your task, you're still guaranteed to be able to represent that range.
Q3. Unfortunately, the theoretical guarantees for combining neural networks (and non-linear function approximation more generally) with reinforcement learning are pretty much non-existent. There are a few fancier versions of reinforcement learning (mainly out of the Sutton lab) that can make the sorts of convergence claims you mention, but I've never really seen those algorithms applied 'in the wild'. The reason for this is that while great performance can't be promised, it is typically obtained in practice, with proper attention to hyper-parameters and initial conditions.
One final point that bears mentioning for neural networks in general: don't use sigmoid activation functions for networks with a lot of hidden layers! They're cursed with the problem of 'vanishing gradients'; the error signal hardly reaches the earlier layers (looking at the derivative of the function should make it clear why this is the case). Instead, try using rectified linear units (RELU) or 'soft plus' units, as they generally exhibit much better performance in deep networks.
See this paper for a great implementation of neural networks trained with reinforcement learning:
Mnih, Volodymyr, et al. "Playing Atari with deep reinforcement learning." arXiv preprint arXiv:1312.5602 (2013).
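The "one output per action" idea from Q1 and the Q-learning target can be sketched with a linear approximator standing in for a full neural network. This toy code is illustrative only (the helper names `q_values` and `td_update` are made up); it shows that one evaluation yields all action values, and that the update moves $Q(s,a)$ toward $r + \gamma \max_{a'} Q(s',a')$.

```python
# Q(s, a) = w[a] . phi(s): one weight vector (hence one output) per action,
# so a single evaluation returns every action's value at once.

def q_values(w, phi):
    """All action values for feature vector phi."""
    return [sum(wi * fi for wi, fi in zip(w_a, phi)) for w_a in w]

def td_update(w, phi, a, r, phi_next, gamma=0.9, lr=0.1, terminal=False):
    """One Q-learning step toward the target r + gamma * max_a' Q(s', a')."""
    target = r if terminal else r + gamma * max(q_values(w, phi_next))
    td_error = target - q_values(w, phi)[a]
    # For a linear approximator, dQ(s,a)/dw[a] = phi(s); only the weights
    # of the action actually taken are moved.
    w[a] = [wi + lr * td_error * fi for wi, fi in zip(w[a], phi)]
    return td_error

# Toy check: a single terminal transition with reward 1 drives Q(s, a=0)
# toward 1 while leaving the other action's weights untouched.
w = [[0.0], [0.0]]  # 2 actions, 1 feature
for _ in range(200):
    td_update(w, [1.0], a=0, r=1.0, phi_next=[1.0], terminal=True)
```

With a deep network the same target is used, but the weight update is replaced by backpropagation through the chosen action's output unit.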
20,437 | Questions about Q-Learning using Neural Networks
For activation functions, maxout also works well. Using a proper trainer is crucial for deep networks; I tried various trainers but decided to stick with RMSprop, and it is working great!
20,438 | Convert Poisson distribution to normal distribution
1) What's depicted appears to be (grouped) continuous data drawn as a bar chart.
You can quite safely conclude that it is not a Poisson distribution.
A Poisson random variable takes values 0, 1, 2, ... and has its highest peak at 0 only when the mean is less than 1. It's used for count data; if you drew a similar chart of Poisson data, it could look like the plots below:
[Two plots of Poisson probability functions]
The first is a Poisson that shows similar skewness to yours. You can see its mean is quite small (around 0.6).
The second is a Poisson that has mean similar (at a very rough guess) to yours. As you see, it looks pretty symmetric.
You can have the skewness or the large mean, but not both at the same time.
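Both claims can be checked numerically. Since $p(1)/p(0)=\lambda$ for a Poisson pmf, the highest peak is at 0 exactly when the mean $\lambda<1$; and the skewness of a Poisson is $\lambda^{-1/2}$ (a standard fact, not stated in the answer), so a large mean forces near-symmetry. An illustrative sketch:

```python
import math

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def poisson_skewness(lam):
    """Skewness of Poisson(lam) is lam ** -0.5."""
    return lam ** -0.5

# p(1)/p(0) = lam, so the peak sits at 0 exactly when lam < 1 ...
mode_at_zero_small_mean = poisson_pmf(0, 0.6) > poisson_pmf(1, 0.6)
# ... while a large mean gives near-symmetry: skewness 1/sqrt(25) = 0.2.
skew_large_mean = poisson_skewness(25)
```

This is why the chart in the question, with a pronounced left peak but a mean well above 1, cannot be Poisson.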
2) (i) You cannot make discrete data normal --
With the grouped data, using any monotonic-increasing transformation, you'll move all values in a group to the same place, so the lowest group will still have the highest peak - see the plot below. In the first plot, we move the positions of the x-values to closely match a normal cdf:
In the second plot, we see the probability function after the transform. We can't really achieve anything like normality because it's both discrete and skew; the big jump of the first group will remain a big jump, no matter whether you push it left or right.
(ii) Continuous skewed data might be transformed to look reasonably normal.
If you have raw (ungrouped) values and they're not heavily discrete, you can possibly do something, but even then often when people seek to transform their data it's either unnecessary or their underlying problem can be solved a different (generally better) way. Sometimes transformation is a good choice, but it's usually done for not-very-good reasons.
So ... why do you want to transform it?
20,439 | Convert Poisson distribution to normal distribution
Posting more fun information for posterity.
There is an older post that discusses a similar problem regarding the use of count data as an independent variable for logistic regressions.
Here it is:
Does using count data as independent variable violate any of GLM assumptions?
As Glen mentioned, if you are simply trying to predict a dichotomous outcome, you may be able to use the untransformed count data as a direct component of your logistic regression model. However, a note of caution: when an independent variable (IV) is both Poisson distributed AND ranges over many orders of magnitude, using the raw values may result in highly influential points, which in turn can bias your model. If this is the case it may be useful to transform your IVs to obtain a more robust model.
Transformations such as the square root or log can augment the relation between the IV and the odds ratio. For example, if changes in X by three entire orders of magnitude (away from the median X value) corresponded with a mere 0.1 change in the probability of Y occurring (away from 0.5), then it's pretty safe to assume that any model discrepancies will lead to significant bias due to the extreme leverage from outlier X values.
To further illustrate, imagine we wanted to use the Scoville rating of various chili peppers ( domain[X] = {0, 3.2 million} ) to predict the probability that a person classifies the pepper as "uncomfortably spicy" ( range[Y] = {1 = yes, 0 = no}) after eating a pepper of corresponding rating X.
https://en.wikipedia.org/wiki/Scoville_scale
If you look at the chart of scoville ratings you can see that a log transform of the raw Scoville ratings would give you a closer approximation to the subjective (1-10) ratings of each chili.
So in this case, if we wanted to make a more robust model that captures the true relation between raw Scoville ratings and subjective heat ratings, we could perform a logarithmic transformation on the X values. By doing this we reduce the impact of the excessively large X domain, effectively "shrinking" the distance between values that differ by orders of magnitude, and consequently reducing the weight any X outliers (e.g. those capsaicin intolerant and/or crazy spice fiends!!!) have on our predictions.
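The "shrinking" effect of the log transform can be made concrete. The ratings below are hypothetical Scoville-style values chosen for illustration (not taken from the actual Scoville table): a 4000-fold raw spread collapses to about 3.6 units on the log10 scale, so no single pepper dominates the fit.

```python
import math

# Hypothetical ratings spanning several orders of magnitude
# (illustrative values only, not from the Scoville table).
ratings = [500, 5_000, 50_000, 500_000, 2_000_000]

raw_span = max(ratings) / min(ratings)            # 4000-fold spread on the raw scale
log_ratings = [math.log10(r) for r in ratings]
log_span = max(log_ratings) - min(log_ratings)    # ~3.6 units after log10
```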
Hope this adds some fun context!
20,440 | Confidence interval and sample size multinomial probabilities
Thank you very much again for your help. Below is the (hopefully correct) solution using the "Normal Approximation Method" of the Binomial Confidence Interval:
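The worked solution referred to here was posted as an image and is not preserved in the text. As a sketch of the normal-approximation method the post names, each category's proportion gets the interval $\hat p \pm z\sqrt{\hat p(1-\hat p)/n}$; the counts below reuse the $n=400$ example with proportions $(0.10, 0.25, 0.35, 0.30)$ that appears later in this thread, and the function name is made up.

```python
import math

def normal_approx_ci(count, n, z=1.96):
    """Normal-approximation CI for a proportion:
    p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)."""
    p = count / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p - half, p + half

# Applied per category of a multinomial sample with n = 400:
counts = [40, 100, 140, 120]
cis = [normal_approx_ci(c, 400) for c in counts]
```

Note that this treats each category as a separate binomial and ignores the constraint that the proportions sum to 1, a limitation also raised in the next answer.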
20,441 | Confidence interval and sample size multinomial probabilities
I would like to add Wilson's method mentioned by Michael M in a comment.
From Wikipedia: Binomial proportion confidence interval - Wilson_score_interval.
You can get a 95% confidence interval by using the following:
$\frac{n_s + \frac{z^2}{2}}{n+z^2} \pm \frac{z}{n+z^2}\sqrt{\frac{n_s n_f}{n}+\frac{z^2}{4}}$
The left term is the center value and the right term gives the value you have to add / subtract to get the interval bounds.
$n_s$ is the number of samples in that category, $n_f$ the number of samples not in that category, $n$ the total number of samples and $z$ is 1.96 if you want a 95% confidence interval*
For high counts it gives the same results of the normal approximation, however this should be better for low counts or extreme values.
As an example, I had a category with 0 samples for which the normal approximation returned a 0 s.e., and thus an absurd confidence interval of 0-0 (as if it were certain that the category has 0% probability, while actually it had zero occurrences only because of the few samples).
* The method is actually for a binomial distribution and $n_s$ and $n_f$ are successes and failures on that distribution.
However, I think it can be reasonably used for a multinomial, even though it does not account for the fact that estimated probabilities must sum up to 1.
The normal approximation doesn't account for that either afaik.
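The Wilson formula above is straightforward to implement; the sketch below (function name made up) also confirms the two behaviours described in the answer: a zero-count category no longer collapses to the degenerate 0-0 interval, and for high counts the result is very close to the normal approximation.

```python
import math

def wilson_ci(ns, n, z=1.96):
    """Wilson score interval as written above:
    center (ns + z^2/2) / (n + z^2),
    half-width z / (n + z^2) * sqrt(ns*nf/n + z^2/4)."""
    nf = n - ns
    center = (ns + z * z / 2.0) / (n + z * z)
    half = z / (n + z * z) * math.sqrt(ns * nf / n + z * z / 4.0)
    return center - half, center + half

# A category with zero observed samples gets a sensible, non-degenerate interval:
zero_lo, zero_hi = wilson_ci(0, 50)
# For large counts it agrees closely with the normal approximation:
big_lo, big_hi = wilson_ci(500, 1000)
```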
20,442 | Confidence interval and sample size multinomial probabilities
There are several methods to get confidence intervals for multinomial proportions, and many of them are implemented in R's function MultinomCI() from the DescTools package:
> DescTools::MultinomCI(400 * c(0.10, 0.25, 0.35, 0.30), sides = "two.sided", method = "sisonglaz")
est lwr.ci upr.ci
[1,] 0.10 0.05 0.1549989
[2,] 0.25 0.20 0.3049989
[3,] 0.35 0.30 0.4049989
[4,] 0.30 0.25 0.3549989
For details, see https://cran.r-project.org/web/packages/DescTools/DescTools.pdf
20,443 | ARIMA vs ARMA on the differenced series
There are three minor issues in tseries::arma compared to stats::arima that lead to a slightly different result in the ARMA model for the differenced series using tseries::arma and ARIMA in stats::arima.
Starting values of the coefficients: stats::arima sets the initial AR and MA coefficients to zero, while tseries::arma uses the procedure described in Hannan and Rissanen (1982) to get initial values of the coefficients.
Scale of the objective function: the objective function in tseries::arma returns the value of the conditional sums of squares, RSS; stats::arima returns 0.5*log(RSS/(n-ncond)).
Optimization algorithm: By default, Nelder-Mead is used in tseries::arma, while stats::arima employs the BFGS algorithm.
The last one can be changed through the argument optim.method in stats::arima but the others would require modifying the code. Below, I show an abridged version of the source code (minimal code for this particular model) for stats::arima where the three issues mentioned above are modified
so that they are the same as in tseries::arma. After addressing these issues, the same result as in tseries::arma is obtained.
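A side note on issue 2: since $w \mapsto 0.5\log(w/k)$ is strictly increasing, the two objective scales share the same minimiser; the scale only changes the gradients and stopping behaviour the optimiser sees. A small one-parameter sketch (illustrative, not from the answer; the toy `rss` function is made up) verifies this:

```python
import math

def rss(theta):
    """Toy residual sum of squares with its minimum at theta = 0.7."""
    return (theta - 0.7) ** 2 + 2.0

def scaled(theta, n_eff=100):
    """stats::arima-style rescaling, 0.5*log(RSS/(n - ncond)), of the same objective."""
    return 0.5 * math.log(rss(theta) / n_eff)

# A strictly increasing transform preserves the ordering of objective values,
# so a grid search finds the same argmin for both scales.
grid = [i / 1000.0 for i in range(-2000, 2001)]
argmin_rss = min(grid, key=rss)
argmin_scaled = min(grid, key=scaled)
```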
Minimal version of stats::arima (with the changes mentioned above):
# objective function, conditional sum of squares
# adapted from "armaCSS" in stats::arima
armaCSS <- function(p, x, arma, ncond)
{
# this does nothing, except returning the vector of coefficients as a list
trarma <- .Call(stats:::C_ARIMA_transPars, p, arma, FALSE)
res <- .Call(stats:::C_ARIMA_CSS, x, arma, trarma[[1L]], trarma[[2L]], as.integer(ncond), FALSE)
# return the conditional sum of squares instead of 0.5*log(res),
  # actually CSS is divided by n-ncond, but that is not relevant in this case
#0.5 * log(res)
res
}
# initial values of coefficients
# adapted from function "arma.init" within tseries::arma
arma.init <- function(dx, max.order, lag.ar=NULL, lag.ma=NULL)
{
n <- length(dx)
k <- round(1.1*log(n))
e <- as.vector(na.omit(drop(ar.ols(dx, order.max = k, aic = FALSE, demean = FALSE, intercept = FALSE)$resid)))
ee <- embed(e, max.order+1)
xx <- embed(dx[-(1:k)], max.order+1)
return(lm(xx[,1]~xx[,lag.ar+1]+ee[,lag.ma+1]-1)$coef)
}
# modified version of stats::arima
modified.arima <- function(x, order, seasonal, init)
{
n <- length(x)
arma <- as.integer(c(order[-2L], seasonal$order[-2L], seasonal$period, order[2L], seasonal$order[2L]))
narma <- sum(arma[1L:4L])
ncond <- order[2L] + seasonal$order[2L] * seasonal$period
ncond1 <- order[1L] + seasonal$period * seasonal$order[1L]
ncond <- as.integer(ncond + ncond1)
optim(init, armaCSS, method = "Nelder-Mead", hessian = TRUE, x=x, arma=arma, ncond=ncond)$par
}
Now, compare both procedures and check that they yield the same result (this requires the series x generated by the OP in the body of the question).
Using the initial values chosen in tseries::arma:
dx <- diff(x)
fit1 <- arma(dx, order=c(3,3), include.intercept=FALSE)
coef(fit1)
# ar1 ar2 ar3 ma1 ma2 ma3
# 0.33139827 0.80013071 -0.45177254 0.67331027 -0.14600320 -0.08931003
init <- arma.init(diff(x), 3, 1:3, 1:3)
fit2.coef <- modified.arima(x, order=c(3,1,3), seasonal=list(order=c(0,0,0), period=1), init=init)
fit2.coef
# xx[, lag.ar + 1]1 xx[, lag.ar + 1]2 xx[, lag.ar + 1]3 ee[, lag.ma + 1]1
# 0.33139827 0.80013071 -0.45177254 0.67331027
# ee[, lag.ma + 1]2 ee[, lag.ma + 1]3
# -0.14600320 -0.08931003
all.equal(coef(fit1), fit2.coef, check.attributes=FALSE)
# [1] TRUE
Using the initial values chosen in stats::arima (zeros):
fit3 <- arma(dx, order=c(3,3), include.intercept=FALSE, coef=rep(0,6))
coef(fit3)
# ar1 ar2 ar3 ma1 ma2 ma3
# 0.33176424 0.79999112 -0.45215742 0.67304072 -0.14592152 -0.08900624
init <- rep(0, 6)
fit4.coef <- modified.arima(x, order=c(3,1,3), seasonal=list(order=c(0,0,0), period=1), init=init)
fit4.coef
# [1] 0.33176424 0.79999112 -0.45215742 0.67304072 -0.14592152 -0.08900624
all.equal(coef(fit3), fit4.coef, check.attributes=FALSE)
# [1] TRUE
There are three minor issues in tseries::arma compared to stats::arima that lead to a slightly different result in the ARMA model for the differenced series using tseries::arma and ARIMA in stats::arima.
Starting values of the coefficients: stats::arima sets the initial AR and MA coefficients to zero, while tseries::arma uses the procedure described in Hannan and Rissanen (1982) is employed to get initial values of the coefficients.
Scale of the objective function: the objective function in tseries::arma returns the value of the conditional sums of squares, RSS; stats::arima returns 0.5*log(RSS/(n-ncond)).
Optimization algorithm: By default, Nelder-Mead is used in tseries::arma, while stats::arima employs the BFGS algorithm.
The last one can be changed through the argument optim.method in stats::arima, but the others would require modifying the code. Below, I show an abridged version of the source code of stats::arima (minimal code for this particular model) where the three issues mentioned above are modified so that they match tseries::arma. After addressing these issues, the same result as in tseries::arma is obtained.
Minimal version of stats::arima (with the changes mentioned above):
# objective function, conditional sum of squares
# adapted from "armaCSS" in stats::arima
armaCSS <- function(p, x, arma, ncond)
{
# this does nothing, except returning the vector of coefficients as a list
trarma <- .Call(stats:::C_ARIMA_transPars, p, arma, FALSE)
res <- .Call(stats:::C_ARIMA_CSS, x, arma, trarma[[1L]], trarma[[2L]], as.integer(ncond), FALSE)
# return the conditional sum of squares instead of 0.5*log(res),
# (actually the CSS is divided by n-ncond, but that is not relevant in this case)
#0.5 * log(res)
res
}
# initial values of coefficients
# adapted from function "arma.init" within tseries::arma
arma.init <- function(dx, max.order, lag.ar=NULL, lag.ma=NULL)
{
n <- length(dx)
k <- round(1.1*log(n))
e <- as.vector(na.omit(drop(ar.ols(dx, order.max = k, aic = FALSE, demean = FALSE, intercept = FALSE)$resid)))
ee <- embed(e, max.order+1)
xx <- embed(dx[-(1:k)], max.order+1)
return(lm(xx[,1]~xx[,lag.ar+1]+ee[,lag.ma+1]-1)$coef)
}
# modified version of stats::arima
modified.arima <- function(x, order, seasonal, init)
{
n <- length(x)
arma <- as.integer(c(order[-2L], seasonal$order[-2L], seasonal$period, order[2L], seasonal$order[2L]))
narma <- sum(arma[1L:4L])
ncond <- order[2L] + seasonal$order[2L] * seasonal$period
ncond1 <- order[1L] + seasonal$period * seasonal$order[1L]
ncond <- as.integer(ncond + ncond1)
optim(init, armaCSS, method = "Nelder-Mead", hessian = TRUE, x=x, arma=arma, ncond=ncond)$par
}
Now, compare both procedures and check that they yield the same result (requires the series x generated by the OP in the body of the question).
Using the initial values chosen in tseries::arma:
dx <- diff(x)
fit1 <- arma(dx, order=c(3,3), include.intercept=FALSE)
coef(fit1)
# ar1 ar2 ar3 ma1 ma2 ma3
# 0.33139827 0.80013071 -0.45177254 0.67331027 -0.14600320 -0.08931003
init <- arma.init(diff(x), 3, 1:3, 1:3)
fit2.coef <- modified.arima(x, order=c(3,1,3), seasonal=list(order=c(0,0,0), period=1), init=init)
fit2.coef
# xx[, lag.ar + 1]1 xx[, lag.ar + 1]2 xx[, lag.ar + 1]3 ee[, lag.ma + 1]1
# 0.33139827 0.80013071 -0.45177254 0.67331027
# ee[, lag.ma + 1]2 ee[, lag.ma + 1]3
# -0.14600320 -0.08931003
all.equal(coef(fit1), fit2.coef, check.attributes=FALSE)
# [1] TRUE
Using the initial values chosen in stats::arima (zeros):
fit3 <- arma(dx, order=c(3,3), include.intercept=FALSE, coef=rep(0,6))
coef(fit3)
# ar1 ar2 ar3 ma1 ma2 ma3
# 0.33176424 0.79999112 -0.45215742 0.67304072 -0.14592152 -0.08900624
init <- rep(0, 6)
fit4.coef <- modified.arima(x, order=c(3,1,3), seasonal=list(order=c(0,0,0), period=1), init=init)
fit4.coef
# [1] 0.33176424 0.79999112 -0.45215742 0.67304072 -0.14592152 -0.08900624
all.equal(coef(fit3), fit4.coef, check.attributes=FALSE)
# [1] TRUE
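A language-independent way to see issue 2 (the scale of the objective function): since log is strictly increasing, minimizing the CSS and minimizing 0.5*log(CSS/n) select exactly the same parameter value; in practice the rescaling only changes the optimizer's step sizes and stopping behavior. A rough sketch in Python (made-up AR(1) data, a grid search instead of a real optimizer):

```python
import numpy as np

# Simulate an AR(1) series and compare the two objective functions for phi.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.6 * x[t - 1] + rng.normal()

def css(phi):
    resid = x[1:] - phi * x[:-1]   # residuals conditional on the first value
    return np.sum(resid ** 2)

grid = np.linspace(-0.99, 0.99, 1981)
rss = np.array([css(p) for p in grid])
phi_css = grid[np.argmin(rss)]                       # minimizer of CSS
phi_log = grid[np.argmin(0.5 * np.log(rss / x.size))]  # minimizer of 0.5*log(CSS/n)
print(phi_css, phi_log)   # identical minimizers, close to the true 0.6
```

On a grid the two argmins coincide exactly; with a numerical optimizer such as Nelder-Mead or BFGS they can stop at very slightly different points, which is part of why the two R functions disagree in the last digits.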
20,444 | ARIMA vs ARMA on the differenced series | As far as I can tell, the difference is entirely due to the MA terms. That is, when I fit your data with only AR terms, the ARMA of the differenced series and ARIMA agree.
20,445 | Log Likelihood for GLM | It appears that the logLik function in R calculates what is referred to in SAS as the "full likelihood function", which in this case includes the binomial coefficient. I did not include the binomial coefficient in the mle2 calculation because it has no impact on the parameter estimates. Once this constant is added to the log likelihood in the mle2 calculation, glm and mle2 agree.
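The reason the constant has no impact can be shown with toy numbers (not the OP's data): the binomial coefficient enters the log-likelihood only as an additive constant in p, so the maximizing p is unchanged and only the reported log-likelihood value shifts.

```python
import math

# Binomial log-likelihood for n trials, k successes, with and without
# the constant log C(n, k).
n, k = 20, 7

def loglik_full(p):
    return math.log(math.comb(n, k)) + k * math.log(p) + (n - k) * math.log(1 - p)

def loglik_kernel(p):
    return k * math.log(p) + (n - k) * math.log(1 - p)

grid = [i / 1000 for i in range(1, 1000)]
p_full = max(grid, key=loglik_full)      # argmax with the constant
p_kernel = max(grid, key=loglik_kernel)  # argmax without it
print(p_full, p_kernel)                  # both equal k/n = 0.35
print(loglik_full(0.35) - loglik_kernel(0.35))  # the constant log C(20, 7)
```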
20,446 | Two methods of bootstrap significance tests | The first approach is classical and trustworthy but cannot always be used. To get bootstrap samples assuming the null hypothesis, you must either be willing to assume a theoretical distribution to hold (this is your first option) or to assume that your statistic of interest has the same distributional shape when shifted to the null hypothesis (your second option). For example, under the usual assumptions the t-distribution has the same shape when shifted to another mean. However, changing the null frequency of a binomial distribution from 0.5 to 0.025 will also change the shape.
In my experience, when you are willing to make these assumptions you often also have other options. In your example 1), where you seem to assume that both samples could have come from the same base population, a permutation test would be better in my opinion.
There is another option (which seems to be your 2nd choice) which is based on bootstrap confidence intervals. Basically, this uses the fact that, if the stated coverage holds, significance at a level of $\alpha$ is equivalent to the null hypothesis not being included in the $(1-\alpha)$-confidence interval. See for example, this question: What is the difference between confidence intervals and hypothesis testing?
This is a very flexible method and applicable for many tests. However, it is very critical to construct good bootstrap confidence intervals and not simply to use Wald-approximations or the percentile method. Some info is here: Bootstrap-based confidence interval
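A toy sketch of both routes (made-up data): a permutation test, which directly enforces the null that the two samples share one distribution, and the interval-inversion route, shown here with the crude percentile interval purely for brevity, i.e. exactly the variant warned about above.

```python
import numpy as np

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 80)
b = rng.normal(0.8, 1.0, 80)
obs = a.mean() - b.mean()

# (1) permutation test of "same base population"
pooled = np.concatenate([a, b])
perm = np.empty(2000)
for i in range(2000):
    s = rng.permutation(pooled)
    perm[i] = s[:80].mean() - s[80:].mean()
p_perm = np.mean(np.abs(perm) >= abs(obs))

# (2) percentile bootstrap interval for the mean difference;
#     reject at the 5% level if 0 falls outside it
boot = np.empty(2000)
for i in range(2000):
    boot[i] = rng.choice(a, a.size).mean() - rng.choice(b, b.size).mean()
lo, hi = np.percentile(boot, [2.5, 97.5])
print(p_perm, lo, hi)   # small p-value; interval excludes 0
```

For a serious analysis the percentile interval would be replaced by a better-constructed one (e.g. BCa or studentized), as the answer recommends.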
20,447 | What are some good resources for the history of time series analysis? | You might find Mary S. Morgan's History of Econometric Ideas interesting. Its focus is on the history of economic cycle analysis from the late 19th to the mid 20th century. In particular, the story of the debate on the relationship between sunspots and agricultural output was fascinating.
20,448 | What are some good resources for the history of time series analysis? | On time series visualization, a good reference is Aigner, Miksch, Schumann and Tominski (2011) Visualization of Time-Oriented Data, Springer. Chapter 2 covers history.
20,449 | What are some good resources for the history of time series analysis? | Another source on the history of time series analysis is "The Foundations of Modern Time Series Analysis" by Terence C. Mills.
20,450 | What are some good resources for the history of time series analysis? | This is quite good also. I'm not sure how it compares to the ones already mentioned.
Statistical Visions in Time (A History of Time Series Analysis, 1662-1938) by Judy L. Klein.
20,451 | fitting an exponential function using least squares vs. generalized linear model vs. nonlinear least squares | The difference is basically the difference in the assumed distribution of the random component, and how the random component interacts with the underlying mean relationship.
Using nonlinear least squares effectively assumes the noise is additive, with constant variance (and least squares is maximum likelihood for normal errors).
The other two assume that the noise is multiplicative, and that the variance is proportional to the square of the mean. Taking logs and fitting a least squares line is maximum likelihood for the lognormal, while the GLM you fitted is maximum likelihood (at least for its mean) for the Gamma (unsurprisingly). Those two will be quite similar, but the Gamma will put less weight on very low values, while the lognormal one will put relatively less weight on the highest values.
(Note that to properly compare the parameter estimates for those two, you need to deal with the difference between expectation on the log scale and expectation on the original scale. The mean of a transformed variable is not the transformed mean in general.)
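A small sketch of that retransformation point for the lognormal case (made-up data): least squares on log(y), exponentiated back, estimates the conditional median of y; recovering the conditional mean needs the extra lognormal factor exp(sigma^2/2).

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 2.0, 5000)
# multiplicative lognormal noise around exp(1 + 1.5*x)
y = np.exp(1.0 + 1.5 * x) * rng.lognormal(mean=0.0, sigma=0.5, size=x.size)

b_hat, a_hat = np.polyfit(x, np.log(y), 1)   # OLS on the log scale
resid = np.log(y) - (a_hat + b_hat * x)
s2 = resid.var()                             # estimated sigma^2 on the log scale

median_fit = np.exp(a_hat + b_hat * x)       # conditional median of y
mean_fit = median_fit * np.exp(s2 / 2.0)     # conditional mean of y
print(b_hat, y.mean() / mean_fit.mean())     # slope near 1.5, ratio near 1
```

Without the exp(s2/2) factor, the back-transformed fit systematically under-predicts the mean of y.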
20,452 | What books to read for MCMC theory? | Since you are asking for MCMC theory, I am assuming Markov chains on general state spaces are of the most interest here. Here I provide some books/articles and what they are good for.
Markov Chains and Stochastic Stability by Meyn and Tweedie. This is considered to be the most thorough book on the theory for Markov chains in MCMC. You will find that most research in the theory of MCMC refer to this book often. Their treatment is fairly measure theoretic, too much maybe for my taste. This is probably a good one stop place.
General Irreducible Markov Chains and Non-Negative Operators by Nummelin. This is a thin and (very) concise book on general state space Markov chains. To be honest, it takes a lot of time to wrap your head around the notation. Not my favorite, but rigorous nonetheless.
General State Space Markov Chains by Roberts and Rosenthal. This is not a book, but a survey paper on MCMC methods with detailed theory. This would be the perfect place to start for someone interested in MCMC theory. They also cite various books for readers to refer to.
Handbook of Markov Chain Monte Carlo by Brooks, Gelman, Jones, and Meng. Not theoretical but definitely more recent (and in my view better) than other MCMC practice books.
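For readers who want a concrete object to connect the theory to, the chains these books analyze can be as simple as this: a random-walk Metropolis sampler targeting the standard normal (a minimal hypothetical sketch, not taken from any of the books above).

```python
import numpy as np

rng = np.random.default_rng(3)

def metropolis(n_draws, step=1.0):
    x = 0.0
    out = np.empty(n_draws)
    for i in range(n_draws):
        prop = x + rng.normal(scale=step)
        # accept with probability min(1, pi(prop)/pi(x)), pi = N(0,1) density
        if np.log(rng.random()) < 0.5 * (x * x - prop * prop):
            x = prop
        out[i] = x
    return out

draws = metropolis(50_000)[5_000:]   # discard burn-in
print(draws.mean(), draws.var())     # approximately 0 and 1
```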
20,453 | How to measure shape of cluster? | I like Gaussian Mixture models (GMM's).
One of their features is that, in the probit domain, they act like piecewise interpolators. One implication of this is that they can act like a replacement basis, a universal approximator. This means that for non-Gaussian distributions, like the lognormal, Weibull, or crazier non-analytic ones, as long as some criteria are met, the GMM can approximate the distribution.
So if you know the parameters of the AICc or BIC optimal approximation using GMM then you can project that to smaller dimensions. You can rotate it, and look at the principal axes of the components of the approximating GMM.
The consequence would be an informative and visually accessible way to look at the most important parts of higher dimensional data using our 3d-viewing visual perception.
EDIT: (sure thing, whuber)
There are several ways to look at the shape.
You can look at trends in the means. A lognormal is approximated by a series of Gaussians whose means get progressively closer and whose weights get smaller along the progression. The sum approximates the heavier tail. In n-dimensions, a sequence of such components would make a lobe. You can track distances between means (convert to high dimension) and direction cosines between them as well. This would convert to much more accessible dimensions.
You can make a 3d system whose axes are the weight, the magnitude of the mean, and the magnitude of the variance/covariance. If you have a very high cluster-count, this is a way to view them in comparison with each other. It is a valuable way to convert 50k parts with 2k measures each into a few clouds in a 3d space. I can execute process control in that space, if I choose. I like the recursion of using gaussian mixture model based control on components of gaussian mixture model fits to part parameters.
In terms of de-cluttering you can throw away by very small weight, or by weight per covariance, or such.
You can plot the GMM cloud in terms of BIC, $R^2$, Mahalanobis distance to components or overall, probability of membership or overall.
You could look at it like bubbles intersecting. The location of equal probability (zero Kullback-Leibler divergence) exists between each pair of GMM clusters. If you track that position, you can filter by probability of membership at that location. It will give you points of classification boundaries. This will help you isolate "loners". You can count the number of such boundaries above the threshold per member and get a list of "connectedness" per component. You can also look at angles and distances between locations.
You can resample the space using random numbers given the Gaussian PDFs, and then perform principal component analysis on it, and look at the eigen-shapes and eigenvalues associated with them.
EDIT:
What does shape mean? They say specificity is the soul of all good communication.
What do you mean about "measure"?
Ideas about what it can mean:
Eyeball norm sense/feels of general form. (extremely qualitative, visual accessibility)
measure of GD&T shape (coplanarity, concentricity, etc) (extremely quantitative)
something numeric (eigenvalues, covariances, etc...)
a useful reduced dimension coordinate (like GMM parameters becoming dimensions)
a reduced noise system (smoothed in some way, then presented)
Most of the "several ways" are some variation on these. | How to measure shape of cluster? | I like Gaussian Mixture models (GMM's).
One of their features is that, in probit domain, they act like piecewise interpolators. One implication of this is that they can act like a replacement basis | How to measure shape of cluster?
I like Gaussian Mixture models (GMM's).
One of their features is that, in probit domain, they act like piecewise interpolators. One implication of this is that they can act like a replacement basis, a universal approximator. This means that for non-gaussian distributions, like lognormal, weibull, or crazier non-analytic ones, as long as some criteria are met - the GMM can approximate the distribution.
So if you know the parameters of the AICc or BIC optimal approximation using GMM then you can project that to smaller dimensions. You can rotate it, and look at the principal axes of the components of the approximating GMM.
The consequence would be an informative and visually accessible way to look at the most important parts of higher dimensional data using our 3d-viewing visual perception.
EDIT: (sure thing, whuber)
There are several ways to look at the shape.
You can look at trends in the means. A lognormal is approximated by a series of Gaussians whos means get progressively closer and whose weights get smaller along the progression. The sum approximates the heavier tail. In n-dimensions, a sequence of such components would make a lobe. You can track distances between means (convert to high dimension) and direction cosines between as well. This would convert to much more accessible dimensions.
You can make a 3d system whose axes are the weight, the magnitude of the mean, and the magnitude of the variance/covariance. If you have a very high cluster-count, this is a way to view them in comparison with each other. It is a valuable way to convert 50k parts with 2k measures each into a few clouds in a 3d space. I can execute process control in that space, if I choose. I like the recursion of using gaussian mixture model based control on components of gaussian mixture model fits to part parameters.
In terms of de-cluttering you can throw away by very small weight, or by weight per covariance, or such.
You can plot the GMM cloud in terms of BIC, $ R^2$, Mahalanobis distance to components or overall, probability of membership or overall.
You could look at it like bubbles intersecting. The location of equal probability (zero Kullback-Leibler divergence) exists between each pair of GMM clusters. If you track that position, you can filter by probability of membership at that location. It will give you points of classification boundaries. This will help you isolate "loners". You can count the number of such boundaries above the threshold per member and get a list of "connectedness" per component. You can also look at angles and distances between locations.
You can resample the space using random numbers given the Gaussian PDFs, and then perform principle component analysis on it, and look at the eigen-shapes, and eigenvalues associated with them.
EDIT:
What does shape mean? They say specificity is the soul of all good communication.
What do you mean about "measure"?
Ideas about what it can mean:
Eyeball norm sense/feels of general form. (extremely qualitative, visual accessibility)
measure of GD&T shape (coplanarity, concentricity, etc) (extremely quantitative)
something numeric (eigenvalues, covariances, etc...)
a useful reduced dimension coordinate (like GMM parameters becoming dimensions)
a reduced noise system (smoothed in some way, then presented)
Most of the "several ways" are some variation on these. | How to measure shape of cluster?
I like Gaussian Mixture models (GMM's).
One of their features is that, in probit domain, they act like piecewise interpolators. One implication of this is that they can act like a replacement basis |
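A rough illustration of the approximation claim above: a two-component Gaussian mixture fitted to lognormal samples by a few EM steps (plain numpy for self-containment; in practice a library implementation such as scikit-learn's GaussianMixture would be used, and AICc/BIC would choose the component count).

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.lognormal(mean=0.0, sigma=0.6, size=2000)

def normal_pdf(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# crude starting values: equal weights, means straddling the sample mean
w = np.array([0.5, 0.5])
mu = np.array([data.mean() - 0.5, data.mean() + 0.5])
var = np.array([data.var(), data.var()])

def loglik():
    dens = w[0] * normal_pdf(data, mu[0], var[0]) + w[1] * normal_pdf(data, mu[1], var[1])
    return np.log(dens).sum()

ll = [loglik()]
for _ in range(50):
    # E-step: responsibility of each component for each point
    r = np.stack([w[k] * normal_pdf(data, mu[k], var[k]) for k in range(2)])
    r /= r.sum(axis=0)
    # M-step: reweighted moment updates
    nk = r.sum(axis=1)
    w = nk / data.size
    mu = (r * data).sum(axis=1) / nk
    var = (r * (data - mu[:, None]) ** 2).sum(axis=1) / nk
    ll.append(loglik())
print(ll[0], ll[-1])   # EM never decreases the log-likelihood
```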
20,454 | How to measure shape of cluster? | This might be rather simplistic, but you might get some insight by doing an eigenvalue analysis on each of your clusters.
What I would try is to take all points assigned to a cluster and fit them with a multivariate Gaussian. Then you can compute the eigenvalues of the fitted covariance matrix and plot them. There are many ways to do this; perhaps the most well-known and widely used is called principal component analysis, or PCA.
Once you have the eigenvalues (also called a spectrum), you can examine their relative sizes to determine how "stretched out" the cluster is in certain dimensions. The less uniform the spectrum, the more "cigar-shaped" the cluster is, and the more uniform the spectrum, the more spherical the cluster is. You could even define some sort of metric indicating how non-uniform the eigenvalues are (spectral entropy?); see http://en.wikipedia.org/wiki/Spectral_flatness.
As a side benefit, you can examine the principal components (the eigenvectors associated with large eigenvalues) to see "where" the "cigar-shaped" clusters are pointing in your data space.
Naturally this is a crude approximation for an arbitrary cluster, as it only models the points in the cluster as a single ellipsoid. But, like I said, it might give you some insight.
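A minimal sketch of this recipe on made-up clusters: fit the covariance, take its eigenvalue spectrum, and compare how concentrated it is for a "cigar" versus a roughly spherical cloud.

```python
import numpy as np

rng = np.random.default_rng(5)
cigar = rng.normal(size=(1000, 3)) * np.array([5.0, 0.5, 0.5])   # stretched
sphere = rng.normal(size=(1000, 3))                              # round

def spectrum(points):
    # eigenvalues of the fitted covariance, normalized and sorted descending
    evals = np.linalg.eigvalsh(np.cov(points.T))
    return evals[::-1] / evals.sum()

print(spectrum(cigar))    # one dominant eigenvalue: cigar-shaped
print(spectrum(sphere))   # nearly uniform spectrum: spherical
```

The leading eigenvectors (from np.linalg.eigh) then give the directions in which the "cigar" points, as mentioned above.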
20,455 | How to measure shape of cluster? | Correlation clustering algorithms such as 4C, ERiC or LMCLUS usually consider clusters to be linear manifolds, i.e. k-dimensional hyperplanes in a d-dimensional space. Well, for 4C and ERiC only locally linear, so they can in fact be non-convex. But they still try to detect clusters of a reduced local dimensionality.
Finding arbitrary shaped clusters in high dimensional data is a quite tough problem, in particular because of the curse of dimensionality, which lets the search space explode and at the same time requires much larger input data if you still want significant results. Way too many algorithms don't pay attention to whether what they find is still significant or could just as well be random.
So in fact I believe there are other problems to solve before thinking about the convexity or non-convexity of complex clusters in high-dimensional space.
Also have a look at the complexity of computing the convex hull in higher dimensions...
Also, do you have a true use case for that beyond curiosity?
20,456 | How to measure shape of cluster? | If your dimensionality is not much higher than 2 or 3, then it might be possible to project the cluster of interest into 2D space multiple times and visualize the results or use your 2D measurement of nonlinearity. I thought of this because of the method Random Projections http://users.ics.aalto.fi/ella/publications/randproj_kdd.pdf.
Random projections can be used to reduce the dimensionality in order to build an index. The theory is that if two points are close in D dimensions and you take a random projection into d dimensions with d much smaller than D, then with high probability the two points remain close after the projection.
For concreteness, you can think of projecting a globe onto a flat surface. No matter how you project it New York and New Jersey are going to be together, but only rarely will you push New York and London together.
I don't know if this can help you rigorously but it might be a quick way to visualize the clusters. | How to measure shape of cluster? | If your dimensionality is not much higher than 2 or 3, then it might be possible to project the cluster of interest into 2D space multiple times and visualize the results or use your 2D measurement of | How to measure shape of cluster?
If your dimensionality is not much higher than 2 or 3, then it might be possible to project the cluster of interest into 2D space multiple times and visualize the results or use your 2D measurement of nonlinearity. I thought of this because of the method Random Projections http://users.ics.aalto.fi/ella/publications/randproj_kdd.pdf.
Random projections can be used to reduce the dimensionality in order to build an index. The theory is that if two points are close in D dimensions and you take a random projection into d dimensions with d much smaller than D, then with high probability the two points remain close after the projection.
For concreteness, you can think of projecting a globe onto a flat surface. No matter how you project it New York and New Jersey are going to be together, but only rarely will you push New York and London together.
I don't know if this can help you rigorously but it might be a quick way to visualize the clusters. | How to measure shape of cluster?
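To make the idea concrete, here is a rough sketch (my own illustration, not from the paper; the function name and toy data are mine) of projecting 10-dimensional points into 2D with a Gaussian random matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two well-separated clusters in 10 dimensions
X = np.vstack([
    rng.normal(0.0, 1.0, size=(50, 10)),
    rng.normal(8.0, 1.0, size=(50, 10)),
])

def random_projection(X, d=2, rng=None):
    """Project the rows of X into d dimensions with a Gaussian random matrix."""
    rng = rng or np.random.default_rng()
    R = rng.normal(size=(X.shape[1], d)) / np.sqrt(d)  # scaling roughly preserves distances
    return X @ R

Y = random_projection(X, d=2, rng=rng)  # one 2D view of the 10-D clusters
```

Each call with a fresh matrix gives a different 2D view to eyeball; by the Johnson-Lindenstrauss argument, points that are close in the original space stay close in most of those views.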
If your dimensionality is not much higher than 2 or 3, then it might be possible to project the cluster of interest into 2D space multiple times and visualize the results or use your 2D measurement of |
20,457 | hierarchical Bayesian models vs. empirical Bayes | I would say that HBM is certainly "more Bayesian" than EB, as marginalizing is a more Bayesian approach than optimizing. Essentially it seems to me that EB ignores the uncertainty in the hyper-parameters, whereas HBM attempts to include it in the analysis. I suspect HBM is a good idea where there is little data and hence significant uncertainty in the hyper-parameters, which must be accounted for. On the other hand, for large datasets EB becomes more attractive as it is generally less computationally expensive and the volume of data often means the results are much less sensitive to the hyper-parameter settings.
I have worked on Gaussian process classifiers and quite often optimizing the hyper-parameters to maximize the marginal likelihood results in over-fitting the ML and hence significant degradation in generalization performance. I suspect in those cases, a full HBM treatment would be more reliable, but also much more expensive. | hierarchical Bayesian models vs. empirical Bayes | I would say that HBM is certainly "more Bayesian" than EB, as marginalizing is a more Bayesian approach than optimizing. Essentially it seems to me that EB ignores the uncertainty in the hyper-parame | hierarchical Bayesian models vs. empirical Bayes
I would say that HBM is certainly "more Bayesian" than EB, as marginalizing is a more Bayesian approach than optimizing. Essentially it seems to me that EB ignores the uncertainty in the hyper-parameters, whereas HBM attempts to include it in the analysis. I suspect HBM is a good idea where there is little data and hence significant uncertainty in the hyper-parameters, which must be accounted for. On the other hand, for large datasets EB becomes more attractive as it is generally less computationally expensive and the volume of data often means the results are much less sensitive to the hyper-parameter settings.
I have worked on Gaussian process classifiers and quite often optimizing the hyper-parameters to maximize the marginal likelihood results in over-fitting the ML and hence significant degradation in generalization performance. I suspect in those cases, a full HBM treatment would be more reliable, but also much more expensive. | hierarchical Bayesian models vs. empirical Bayes
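The optimize-vs-marginalize distinction is easy to see in a toy conjugate model. A sketch (my own example in plain Python, not from the answer; the grid stands in for a proper hyperprior): estimate a coin's heads-probability from 7 heads in 10 flips under a Beta(a, a) prior with unknown hyperparameter a.

```python
from math import comb, exp, lgamma

x, n = 7, 10  # toy data: 7 heads in 10 flips

def log_beta(p, q):
    return lgamma(p) + lgamma(q) - lgamma(p + q)

def marginal_likelihood(a):
    # beta-binomial marginal: Binomial(x | n, theta) integrated over Beta(theta | a, a)
    return comb(n, x) * exp(log_beta(a + x, a + n - x) - log_beta(a, a))

grid = [0.5 + 0.1 * i for i in range(196)]      # candidate hyperparameters 0.5 .. 20.0
ml = [marginal_likelihood(a) for a in grid]

# Empirical Bayes: optimize -- commit to the single best a, ignoring its uncertainty.
a_hat = max(grid, key=marginal_likelihood)
eb_mean = (a_hat + x) / (2 * a_hat + n)          # posterior mean at a_hat

# Hierarchical Bayes (flat hyperprior on the grid): marginalize -- average the
# posterior mean over all a, each weighted by how well it explains the data.
total = sum(ml)
hbm_mean = sum(m / total * (a + x) / (2 * a + n) for a, m in zip(grid, ml))
```

EB reports the posterior at the single value a_hat; the hierarchical version keeps every a alive with weight proportional to its marginal likelihood, which is where the extra (honest) uncertainty comes from.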
I would say that HBM is certainly "more Bayesian" than EB, as marginalizing is a more Bayesian approach than optimizing. Essentially it seems to me that EB ignores the uncertainty in the hyper-parame |
20,458 | How do economists quantify black market operations? | They're not usually captured in traditional market models and some of the ways of estimating shadow markets can be a bit esoteric.
First, definitions:
informal markets, or shadow markets, are those which fall outside of formal financial systems; these are not necessarily illegal, but cater to sub-economic or very poor communities; however, they can also be local community markets or exchanges - the principle is that these are legal activities which would normally be reported for tax purposes but which are (for whatever reason) not;
black markets, or illegal markets, are more normally associated with illicit market activity (drugs, stolen merchandise, etc.);
The UN World Drug Report (2011) lists a variety of illicit drugs and estimates the value based on tillage (aerial surveillance of agricultural areas), production, and various prices sampled throughout the supply chain by law-enforcement services. It's a reasonable approach, given the difficulties.
In many ways, this type of estimation is similar to the way in which Population Censuses are conducted in many emerging markets and informal settlements. Aerial photographs are used to calculate population density and economic activity. Sampling can estimate product range, constituency and pricing. Supply-chains can be tracked and all of this can be used to estimate a total market size.
And many of these systems come from Wildlife Management. | How do economists quantify black market operations? | They're not usually captured in traditional market models and some of the ways of estimating shadow markets can be a bit esoteric.
First, definitions:
informal markets, or shadow markets, are those | How do economists quantify black market operations?
They're not usually captured in traditional market models and some of the ways of estimating shadow markets can be a bit esoteric.
First, definitions:
informal markets, or shadow markets, are those which fall outside of formal financial systems; these are not necessarily illegal, but cater to sub-economic or very poor communities; however, they can also be local community markets or exchanges - the principle is that these are legal activities which would normally be reported for tax purposes but which are (for whatever reason) not;
black markets, or illegal markets, are more normally associated with illicit market activity (drugs, stolen merchandise, etc.);
The UN World Drug Report (2011) lists a variety of illicit drugs and estimates the value based on tillage (aerial surveillance of agricultural areas), production, and various prices sampled throughout the supply chain by law-enforcement services. It's a reasonable approach, given the difficulties.
In many ways, this type of estimation is similar to the way in which Population Censuses are conducted in many emerging markets and informal settlements. Aerial photographs are used to calculate population density and economic activity. Sampling can estimate product range, constituency and pricing. Supply-chains can be tracked and all of this can be used to estimate a total market size.
And many of these systems come from Wildlife Management. | How do economists quantify black market operations?
They're not usually captured in traditional market models and some of the ways of estimating shadow markets can be a bit esoteric.
First, definitions:
informal markets, or shadow markets, are those |
20,459 | How do economists quantify black market operations? | There are many possible ways to estimate the size of the illegal sector. One particularly clever way, I think, is to see how much energy a country uses compared to its official output. If this number is higher than you would expect, you might believe that other, illegal outputs are being created.
To measure drug use, researchers have measured concentrations of the drugs in sewage water.
See Schneider and Enste, "Shadow Economies: Size, Causes,
and Consequences" from the Journal of Economic Literature for a listing of various methods and a summary IMF document here. | How do economists quantify black market operations? | There are many possible ways to estimate the size of the illegal sector. One particular clever way, I think, is to see how much energy a country uses compared to its official output. If this number is | How do economists quantify black market operations?
There are many possible ways to estimate the size of the illegal sector. One particularly clever way, I think, is to see how much energy a country uses compared to its official output. If this number is higher than you would expect, you might believe that other, illegal outputs are being created.
To measure drug use, researchers have measured concentrations of the drugs in sewage water.
See Schneider and Enste, "Shadow Economies: Size, Causes,
and Consequences" from the Journal of Economic Literature for a listing of various methods and a summary IMF document here. | How do economists quantify black market operations?
There are many possible ways to estimate the size of the illegal sector. One particular clever way, I think, is to see how much energy a country uses compared to its official output. If this number is |
20,460 | Power calculations/sample size for biomarker study | Let's talk about sensitivity (which we'll denote by $p$); the specificity is similar. The following is a frequentist approach; it would be great if one of the Bayesians here could add another answer to discuss an alternative way to go about it.
Suppose you've recruited $n$ people with cancer. You apply your biomarker test to each, so you will get a sequence of 0's and 1's which we'll call x. The entries of x will have a Bernoulli distribution with success probability $p$. The estimate of $p$ is $\hat{p} = \sum x /n$. Hopefully $\hat{p}$ is "big", and you can judge the precision of your estimate via a confidence interval for $p$.
Your question says that you'd like to know how big $n$ should be. To answer it you'll need to consult the biomarker literature to decide how big is "big" and how low of a sensitivity you can tolerate due to sampling error. Suppose you decide that a biomarker is "good" if its sensitivity is bigger than $p = 0.5$ (that's actually not so good), and you'd like $n$ to be big enough so there's a 90% chance to detect a sensitivity of $p = 0.57$. Suppose you'd like to control your significance level at $\alpha = 0.05$.
There are at least two approaches - analytical and simulation. The pwr package in R already exists to help with this design - you need to install it first. Next you'll need an effect size, then the function you want is pwr.p.test.
library(pwr)
h1 <- ES.h(0.57, 0.5)
pwr.p.test(h = h1, n = NULL, sig.level = 0.05, power = 0.9, alt = "greater")
proportion power calculation for binomial distribution (arcsine transformation)
h = 0.1404614
n = 434.0651
sig.level = 0.05
power = 0.9
alternative = greater
So you'd need around $435$ people with cancer to detect a sensitivity of $0.57$ with power $0.90$ when your significance level is $0.05$. I've tried the simulation approach, too, and it gives a similar answer. Of course, if the true sensitivity is higher than $0.57$ (your biomarker is better) then you'd need fewer people to detect it.
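If you want to sanity-check those two numbers without R: ES.h is Cohen's effect size $h$ (an arcsine-transformed difference of proportions), and the sample size comes from the usual one-sided normal-approximation formula, which reproduces pwr's output here. A quick plain-Python translation (mine, not the pwr source):

```python
from math import asin, sqrt
from statistics import NormalDist

def cohens_h(p1, p2):
    """Cohen's h: arcsine-transformed difference of two proportions."""
    return 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

h = cohens_h(0.57, 0.5)               # ~0.1405, matching ES.h(0.57, 0.5) above

# One-sided sample size via the normal approximation
z_alpha = NormalDist().inv_cdf(0.95)  # significance level 0.05, one-sided
z_beta = NormalDist().inv_cdf(0.90)   # power 0.90
n = ((z_alpha + z_beta) / h) ** 2     # ~434.07, matching pwr.p.test's n
```

Rounding $n$ up gives the 435 subjects quoted above.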
Once you've got your data, the way to run the test is (I'll simulate data for the sake of argument).
n <- 435
sens <- 0.57
x <- rbinom(n, size = 1, prob = sens)
binom.test(sum(x), n, p = 0.5, alt = "greater")
Exact binomial test
data: sum(x) and n
number of successes = 247, number of trials = 435,
p-value = 0.002681
alternative hypothesis: true probability of success is greater than 0.5
95 percent confidence interval:
0.527342 1.000000
sample estimates:
probability of success
0.5678161
The estimate of sensitivity is $0.568$. What really matters is the confidence interval for $p$ which in this case is $[0.527, 1]$.
EDIT: If you like the simulation approach better, then you can do it this way: set
n <- 435
sens <- 0.57
nSim <- 1000
and let runTest be
runTest <- function(){
x <- rbinom(1, size = n, prob = sens)
tmp <- binom.test(x, n, p = 0.5, alt = "greater")
tmp$p.value < 0.05
}
so the estimate of power is
mean(replicate(nSim, runTest()))
[1] 0.887 | Power calculations/sample size for biomarker study | Let's talk about sensitivity (which we'll denote by $p$), the specificity is similar. The following is a frequentist approach; it would be great if one of the Bayesians here could add another answer | Power calculations/sample size for biomarker study
Let's talk about sensitivity (which we'll denote by $p$); the specificity is similar. The following is a frequentist approach; it would be great if one of the Bayesians here could add another answer to discuss an alternative way to go about it.
Suppose you've recruited $n$ people with cancer. You apply your biomarker test to each, so you will get a sequence of 0's and 1's which we'll call x. The entries of x will have a Bernoulli distribution with success probability $p$. The estimate of $p$ is $\hat{p} = \sum x /n$. Hopefully $\hat{p}$ is "big", and you can judge the precision of your estimate via a confidence interval for $p$.
Your question says that you'd like to know how big $n$ should be. To answer it you'll need to consult the biomarker literature to decide how big is "big" and how low of a sensitivity you can tolerate due to sampling error. Suppose you decide that a biomarker is "good" if its sensitivity is bigger than $p = 0.5$ (that's actually not so good), and you'd like $n$ to be big enough so there's a 90% chance to detect a sensitivity of $p = 0.57$. Suppose you'd like to control your significance level at $\alpha = 0.05$.
There are at least two approaches - analytical and simulation. The pwr package in R already exists to help with this design - you need to install it first. Next you'll need an effect size, then the function you want is pwr.p.test.
library(pwr)
h1 <- ES.h(0.57, 0.5)
pwr.p.test(h = h1, n = NULL, sig.level = 0.05, power = 0.9, alt = "greater")
proportion power calculation for binomial distribution (arcsine transformation)
h = 0.1404614
n = 434.0651
sig.level = 0.05
power = 0.9
alternative = greater
So you'd need around $435$ people with cancer to detect a sensitivity of $0.57$ with power $0.90$ when your significance level is $0.05$. I've tried the simulation approach, too, and it gives a similar answer. Of course, if the true sensitivity is higher than $0.57$ (your biomarker is better) then you'd need fewer people to detect it.
Once you've got your data, the way to run the test is (I'll simulate data for the sake of argument).
n <- 435
sens <- 0.57
x <- rbinom(n, size = 1, prob = sens)
binom.test(sum(x), n, p = 0.5, alt = "greater")
Exact binomial test
data: sum(x) and n
number of successes = 247, number of trials = 435,
p-value = 0.002681
alternative hypothesis: true probability of success is greater than 0.5
95 percent confidence interval:
0.527342 1.000000
sample estimates:
probability of success
0.5678161
The estimate of sensitivity is $0.568$. What really matters is the confidence interval for $p$ which in this case is $[0.527, 1]$.
EDIT: If you like the simulation approach better, then you can do it this way: set
n <- 435
sens <- 0.57
nSim <- 1000
and let runTest be
runTest <- function(){
x <- rbinom(1, size = n, prob = sens)
tmp <- binom.test(x, n, p = 0.5, alt = "greater")
tmp$p.value < 0.05
}
so the estimate of power is
mean(replicate(nSim, runTest()))
[1] 0.887 | Power calculations/sample size for biomarker study
Let's talk about sensitivity (which we'll denote by $p$), the specificity is similar. The following is a frequentist approach; it would be great if one of the Bayesians here could add another answer |
20,461 | When would one want to use AdaBoost? | AdaBoost can use multiple instances of the same classifier with different parameters. Thus, a previously linear classifier can be combined into a nonlinear classifier. Or, as the AdaBoost people like to put it, multiple weak learners can make one strong learner. A nice picture can be found here, at the bottom.
Basically, it goes as with any other learning algorithm: on some datasets it works, on some it doesn't. There surely are datasets out there where it excels. And maybe you haven't chosen the right weak learner yet. Did you try logistic regression? Did you visualize how the decision boundaries evolve as learners are added? Maybe you can tell what is going wrong. | When would one want to use AdaBoost? | Adaboost can use multiple instances of the same classifier with different parameters. Thus, a previously linear classifier can be combined into nonlinear classifiers. Or, as the AdaBoost people like t | When would one want to use AdaBoost?
AdaBoost can use multiple instances of the same classifier with different parameters. Thus, a previously linear classifier can be combined into a nonlinear classifier. Or, as the AdaBoost people like to put it, multiple weak learners can make one strong learner. A nice picture can be found here, at the bottom.
Basically, it goes as with any other learning algorithm: on some datasets it works, on some it doesn't. There surely are datasets out there where it excels. And maybe you haven't chosen the right weak learner yet. Did you try logistic regression? Did you visualize how the decision boundaries evolve as learners are added? Maybe you can tell what is going wrong. | When would one want to use AdaBoost?
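One way to see the "many weak learners make one strong learner" claim is a back-of-the-envelope majority-vote calculation. This is idealized (it assumes independent, equally weighted voters, which AdaBoost's reweighted, dependent learners are not), but the qualitative effect is the point:

```python
from math import comb

def majority_vote_accuracy(p, m):
    """P(majority of m independent classifiers is right), each with accuracy p.

    Idealized: boosted learners are neither independent nor equally weighted,
    but for p > 0.5 the qualitative effect is the same.
    """
    assert m % 2 == 1  # odd m avoids ties
    return sum(comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(m // 2 + 1, m + 1))

acc1 = majority_vote_accuracy(0.6, 1)      # 0.6 -- a single weak learner
acc11 = majority_vote_accuracy(0.6, 11)    # noticeably better
acc101 = majority_vote_accuracy(0.6, 101)  # close to 1
```

With each voter only 60% accurate, 11 of them vote their way to roughly 75%, and 101 to roughly 98%; boosting earns its keep by manufacturing approximately complementary weak learners even though true independence never holds.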
Adaboost can use multiple instances of the same classifier with different parameters. Thus, a previously linear classifier can be combined into nonlinear classifiers. Or, as the AdaBoost people like t |
20,462 | How do you choose a unit of analysis (level of aggregation) in a time series? | Introduction
My interest in the topic goes back about 7 years and resulted in the PhD thesis Time series: aggregation, disaggregation and long memory, where attention was paid to the specific question of the cross-sectional disaggregation problem for the AR(1) scheme.
Data
Working with different approaches to aggregation the first question you need to clarify is what type of data you deal with (my guess is spatial, the most thrilling one). In practice you may consider temporal aggregation (see Silvestrini, A. and Veridas, D. (2008)), cross-sectional (I loved the article by Granger, C. W. J. (1990)) or both time and space (spatial aggregation is nicely surveyed in Giacomini, R. and Granger, C. W. J. (2004)).
Answers (lengthy)
Now, answering your questions, I put some rough intuition first. Since the problems I meet in practice are often based on inexact data (Andy's assumption
you can measure a time series of observations at any level of precision in time
seems too strong for macro-econometrics, but good for financial and micro-econometrics or any experimental field, where you do control the precision quite well) I do have to bear in mind that my monthly time series are less precise than when I work with yearly data. Besides, more frequent time series, at least in macroeconomics, do have seasonal patterns, which may lead to spurious results (the seasonal components correlate, not the series themselves), so you need to seasonally adjust your data - another source of lower precision for higher-frequency data. Working with cross-sectional data revealed that a high level of disaggregation brings more problems, with probably lots of zeroes to deal with. For instance, a particular household in a data panel may purchase a car once per 5-10 years, but aggregated demand for new (used) cars is much smoother (even for a small town or region).
The weakest point: aggregation always results in a loss of information. You may have the GDP produced by the cross-section of EU countries during a whole decade (say the period 2001-2010), but you will lose all the dynamic features that may be present in an analysis of the detailed panel data set. Large-scale cross-sectional aggregation may turn out to be even more interesting: you, roughly, take simple things (short-memory AR(1) processes), average them over a quite large population and get a "representative" long-memory agent that resembles none of the micro units (one more stone thrown at the representative agent concept). So aggregation ~ loss of information ~ different properties of the objects, and you would like to take control over the level of this loss and/or the new properties. In my opinion, it is better to have precise micro-level data at as high a frequency as possible, but... there is the usual measurement trade-off, you can't be everywhere perfect and precise :)
Technically, producing any regression analysis you do need more room (degrees of freedom) to be more or less confident that (at least) statistically your results are not junk, though they still may be a-theoretical and junk :) So I do put equal weights on questions 1 and 2 (I usually choose quarterly data for macro-analysis). Answering the 3rd sub-question: in practical applications you decide what is more important to you, more precise data or more degrees of freedom. If you take the mentioned assumption into account, the more detailed (or higher-frequency) data is preferable.
Probably the answer will be edited later after some sort of discussion, if any. | How do you choose a unit of analysis (level of aggregation) in a time series? | Introduction
My interest in the topic is now about 7 years and resulted in PhD thesis Time series: aggregation, disaggregation and long memory, where attention was paid to a specific question of cros | How do you choose a unit of analysis (level of aggregation) in a time series?
Introduction
My interest in the topic goes back about 7 years and resulted in the PhD thesis Time series: aggregation, disaggregation and long memory, where attention was paid to the specific question of the cross-sectional disaggregation problem for the AR(1) scheme.
Data
Working with different approaches to aggregation the first question you need to clarify is what type of data you deal with (my guess is spatial, the most thrilling one). In practice you may consider temporal aggregation (see Silvestrini, A. and Veridas, D. (2008)), cross-sectional (I loved the article by Granger, C. W. J. (1990)) or both time and space (spatial aggregation is nicely surveyed in Giacomini, R. and Granger, C. W. J. (2004)).
Answers (lengthy)
Now, answering your questions, I put some rough intuition first. Since the problems I meet in practice are often based on inexact data (Andy's assumption
you can measure a time series of observations at any level of precision in time
seems too strong for macro-econometrics, but good for financial and micro-econometrics or any experimental field, where you do control the precision quite well) I do have to bear in mind that my monthly time series are less precise than when I work with yearly data. Besides, more frequent time series, at least in macroeconomics, do have seasonal patterns, which may lead to spurious results (the seasonal components correlate, not the series themselves), so you need to seasonally adjust your data - another source of lower precision for higher-frequency data. Working with cross-sectional data revealed that a high level of disaggregation brings more problems, with probably lots of zeroes to deal with. For instance, a particular household in a data panel may purchase a car once per 5-10 years, but aggregated demand for new (used) cars is much smoother (even for a small town or region).
The weakest point: aggregation always results in a loss of information. You may have the GDP produced by the cross-section of EU countries during a whole decade (say the period 2001-2010), but you will lose all the dynamic features that may be present in an analysis of the detailed panel data set. Large-scale cross-sectional aggregation may turn out to be even more interesting: you, roughly, take simple things (short-memory AR(1) processes), average them over a quite large population and get a "representative" long-memory agent that resembles none of the micro units (one more stone thrown at the representative agent concept). So aggregation ~ loss of information ~ different properties of the objects, and you would like to take control over the level of this loss and/or the new properties. In my opinion, it is better to have precise micro-level data at as high a frequency as possible, but... there is the usual measurement trade-off, you can't be everywhere perfect and precise :)
Technically, producing any regression analysis you do need more room (degrees of freedom) to be more or less confident that (at least) statistically your results are not junk, though they still may be a-theoretical and junk :) So I do put equal weights on questions 1 and 2 (I usually choose quarterly data for macro-analysis). Answering the 3rd sub-question: in practical applications you decide what is more important to you, more precise data or more degrees of freedom. If you take the mentioned assumption into account, the more detailed (or higher-frequency) data is preferable.
Probably the answer will be edited later after some sort of discussion, if any. | How do you choose a unit of analysis (level of aggregation) in a time series?
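As a purely mechanical illustration of the frequency-vs-degrees-of-freedom trade-off (my own toy numbers, plain Python): temporal aggregation of a monthly series to quarterly totals keeps the totals but throws away the within-quarter dynamics and two thirds of the observations.

```python
# Ten years of a monthly toy series (deterministic, purely illustrative)
monthly = [100 + 5 * ((m * 7) % 12) for m in range(120)]

# Temporal aggregation: sum each block of 3 consecutive months into a quarter
quarterly = [sum(monthly[i:i + 3]) for i in range(0, len(monthly), 3)]

# 120 monthly observations collapse to 40 quarterly ones: the decade's total
# is unchanged, but month-to-month movements are no longer recoverable.
```

The same arithmetic is why quarterly macro data is a common compromise: fewer seasonal-adjustment headaches than monthly data, but still four times the degrees of freedom of yearly data.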
Introduction
My interest in the topic is now about 7 years and resulted in PhD thesis Time series: aggregation, disaggregation and long memory, where attention was paid to a specific question of cros |
20,463 | What is Combinatorial Purged Cross-Validation for time series data? | Another popular question of mine. Well, here's to self-help, starting with a sketch illustrating the 3 paths in the $N = 4, k = 2$ situation:
The number of ways to arrange the 2 test sets to occur in 4 time periods is ${4 \choose 2} = 6$, and $\frac{k}{N} = .5$ is the fraction of the combinations that will start with a test set. Since a "path" is a continuous group of blocks from the first to the last sequential group, there are .5 * 6 = 3 paths, which aligns with $\phi(N, k)$ from the question.
Here's a sketch for a more complicated example with N = 5 and k = 2, which leads to 4 paths: | What is Combinatorial Purged Cross-Validation for time series data? | Another popular question of mine. Well here's to self-help, starting with a sketch illustrating the 3 paths in the $N = 4, k =2$ situation:
The number of ways to arrange the 2 test sets to occur in 4 | What is Combinatorial Purged Cross-Validation for time series data?
Another popular question of mine. Well, here's to self-help, starting with a sketch illustrating the 3 paths in the $N = 4, k = 2$ situation:
The number of ways to arrange the 2 test sets to occur in 4 time periods is ${4 \choose 2} = 6$, and $\frac{k}{N} = .5$ is the fraction of the combinations that will start with a test set. Since a "path" is a continuous group of blocks from the first to the last sequential group, there are .5 * 6 = 3 paths, which aligns with $\phi(N, k)$ from the question.
Here's a sketch for a more complicated example with N = 5 and k = 2, which leads to 4 paths: | What is Combinatorial Purged Cross-Validation for time series data?
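The path count is easy to verify by brute force over the ${N \choose k}$ splits (a quick sketch of mine, not from the paper; the function name is made up):

```python
from itertools import combinations
from math import comb

def n_backtest_paths(N, k):
    """Count CPCV paths by brute force: every one of the N groups serves as a
    test group in the same number of the C(N, k) train/test splits, and each
    such appearance supplies that group's block to one backtest path."""
    return sum(1 for split in combinations(range(N), k) if 0 in split)

# matches phi(N, k) = (k / N) * C(N, k) from the question
assert n_backtest_paths(4, 2) == 2 * comb(4, 2) // 4 == 3  # first sketch
assert n_backtest_paths(5, 2) == 2 * comb(5, 2) // 5 == 4  # second sketch
```

Equivalently, the count is ${N-1 \choose k-1}$: fix one group as a test block and choose the remaining $k-1$ test groups from the other $N-1$.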
Another popular question of mine. Well here's to self-help, starting with a sketch illustrating the 3 paths in the $N = 4, k =2$ situation:
The number of ways to arrange the 2 test sets to occur in 4 |
20,464 | What is Combinatorial Purged Cross-Validation for time series data? | I had the same question. I asked Marcos López de Prado, the person who created that methodology, on Twitter. Here is a link to his response.
Suppose that you have $6$ folds. A CV where you leave $2$ folds out (instead of the standard 1) at each split allows you to compute $5$ estimates for each datapoint. Instead of one PnL line, you can now estimate $5$ PnL lines. | What is Combinatorial Purged Cross-Validation for time series data? | I had the same question. I asked Marcos López de Prado, the person who created that methodology, on Twitter. Here is a link to his response.
Suppose that you have $6$ folds. A CV where you leave $2$ | What is Combinatorial Purged Cross-Validation for time series data?
I had the same question. I asked Marcos López de Prado, the person who created that methodology, on Twitter. Here is a link to his response.
Suppose that you have $6$ folds. A CV where you leave $2$ folds out (instead of the standard 1) at each split allows you to compute $5$ estimates for each datapoint. Instead of one PnL line, you can now estimate $5$ PnL lines. | What is Combinatorial Purged Cross-Validation for time series data?
I had the same question. I asked Marcos López de Prado, the person who created that methodology, on Twitter. Here is a link to his response.
Suppose that you have $6$ folds. A CV where you leave $2$ |
20,465 | Neural Nets: One-hot variable overwhelming continuous? | You can encode the categorical variables with a method different than one-hot. Binary or hashing encoders may be appropriate for this case. Hashing in particular is nice because you encode all of the categories into a single representation per feature vector, so no single one dominates the other. You can also specify the size of the final representation, so can hash all categorical variables into 10 features, and end up with 20 numeric features (half continuous, half categorical).
Both are implemented in https://github.com/scikit-learn-contrib/categorical-encoding, or fairly straightforward to implement yourself. | Neural Nets: One-hot variable overwhelming continuous? | You can encode the categorical variables with a method different than one-hot. Binary or hashing encoders may be appropriate for this case. Hashing in particular is nice because you encode all of the | Neural Nets: One-hot variable overwhelming continuous?
You can encode the categorical variables with a method different than one-hot. Binary or hashing encoders may be appropriate for this case. Hashing in particular is nice because you encode all of the categories into a single representation per feature vector, so no single one dominates the other. You can also specify the size of the final representation, so can hash all categorical variables into 10 features, and end up with 20 numeric features (half continuous, half categorical).
Both are implemented in https://github.com/scikit-learn-contrib/categorical-encoding, or fairly straightforward to implement yourself. | Neural Nets: One-hot variable overwhelming continuous?
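For a feel of what the hashing encoder does, here is a bare-bones version of the hashing trick in plain Python (a sketch of mine; the function name and bucket scheme are made up — sklearn's FeatureHasher and the linked library's hashing encoder are the production versions):

```python
import zlib

def hash_features(pairs, n_buckets=10):
    """Hash (column, value) categorical pairs into a fixed-length vector.

    Sketch of the hashing trick; a hash-derived sign is used, as in common
    implementations, to reduce bias from bucket collisions.
    """
    vec = [0.0] * n_buckets
    for col, val in pairs:
        h = zlib.crc32(f"{col}={val}".encode("utf-8"))  # deterministic hash
        idx = h % n_buckets
        sign = 1.0 if (h >> 31) & 1 == 0 else -1.0      # pseudo-random sign
        vec[idx] += sign
    return vec

row = [("city", "Boston"), ("browser", "Firefox"), ("plan", "premium")]
v = hash_features(row, n_buckets=10)  # always length 10, however many categories
```

Whether you feed it 3 or 3,000 category pairs, the output is always n_buckets numbers, so the categorical block can't out-dimension the continuous features.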
You can encode the categorical variables with a method different than one-hot. Binary or hashing encoders may be appropriate for this case. Hashing in particular is nice because you encode all of the |
20,466 | Neural Nets: One-hot variable overwhelming continuous? | You could use embedding to transform your large number of categorical variables into a single vector. This compressed vector will be a distributed representation of the categorical features. The categorical inputs will be transformed into a relatively small vector of length N, with N real numbers that in some way represent N latent features describing all the inputs.
Consider the large number of words in the English dictionary. If this number is N, then we could represent each word as a one-hot-coded vector of length N. However, word2vec is able to capture virtually all this information in a vector of length between 200 and 300.
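A minimal sketch of what such an embedding lookup does (the sizes and the random table are invented for illustration; in a real network the table entries are learned jointly with the rest of the model):

```python
import random

random.seed(0)
n_categories, emb_dim = 1000, 8   # hypothetical vocabulary size and embedding width

# The embedding is just a lookup table: one dense vector per category
table = [[random.uniform(-1, 1) for _ in range(emb_dim)] for _ in range(n_categories)]

category_id = 42
vector = table[category_id]       # an 8-dim dense vector replaces a 1000-wide one-hot
print(len(vector))  # 8
```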
20,467 | Block bootstrap for a novice [duplicate] | Model-free resampling of time series is accomplished by block resampling, also called block bootstrapping, which can be implemented using the tsboot function in R’s boot package. The idea is to break the series into roughly equal-length blocks of consecutive observations, to resample the blocks with replacement, and then to paste the blocks together. For example, if the time series is of length 200 and one uses 10 blocks of length 20, then the blocks are the first 20 observations, the next 20, and so forth. A possible resample is the fourth block (observations 61 to 80), then the last block (observations 181 to 200), then the second block (observations 21 to 40), then the fourth block again, and so on until there are 10 blocks in the resample.
How do you do bootstrapping with time series data?
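The same scheme can be sketched in a few lines of Python (a toy series, with the 10 blocks of length 20 mirroring the example above):

```python
import random

random.seed(1)
series = list(range(200))              # stand-in for a time series of length 200
block_len = 20
blocks = [series[i:i + block_len] for i in range(0, len(series), block_len)]

# Draw whole blocks with replacement and paste them together
resample = []
for _ in range(len(blocks)):
    resample.extend(random.choice(blocks))

print(len(resample))  # 200: the resample has the original length
```

Within each pasted block the original serial dependence is preserved; only the joins between blocks break it.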
20,468 | The theory behind the weights argument in R when using lm() | The matrix $X$ should be
$$
\begin{bmatrix}
1 & 0\\
1 & 1\\
1 & 2
\end{bmatrix},
$$
not
$$
\begin{bmatrix}
1 & 1\\
1 & 1\\
1 & 1
\end{bmatrix}.
$$
Also, your V_inv should be diag(weights), not diag(1/weights).
x <- c(0, 1, 2)
y <- c(0.25, 0.75, 0.85)
weights <- c(50, 85, 75)
X <- cbind(1, x)
> solve(t(X) %*% diag(weights) %*% X, t(X) %*% diag(weights) %*% y)
[,1]
0.3495122
x 0.2834146
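As a sanity check, the weighted normal equations can be solved by hand for this toy data; the plain-Python sketch below reproduces the coefficients printed above:

```python
# Weighted least squares via the 2x2 normal equations (X^T W X) beta = (X^T W y),
# for the toy data from the R snippet above
x = [0, 1, 2]
y = [0.25, 0.75, 0.85]
w = [50, 85, 75]

sw   = sum(w)
swx  = sum(wi * xi for wi, xi in zip(w, x))
swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
swy  = sum(wi * yi for wi, yi in zip(w, y))
swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))

det = sw * swxx - swx ** 2
intercept = (swxx * swy - swx * swxy) / det
slope = (sw * swxy - swx * swy) / det
print(round(intercept, 7), round(slope, 7))  # 0.3495122 0.2834146
```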
20,469 | The theory behind the weights argument in R when using lm() | To answer this more concisely, the weighted least squares regression using weights in R makes the following assumptions: suppose we have weights = c(w_1, w_2, ..., w_n). Let $\mathbf{y} \in \mathbb{R}^n$, $\mathbf{X}$ be a $n \times p$ design matrix, $\boldsymbol\beta\in\mathbb{R}^p$ be a parameter vector, and $\boldsymbol\epsilon \in \mathbb{R}^n$ be an error vector with mean $\mathbf{0}$ and variance matrix $\sigma^2\mathbf{V}$, where $\sigma^2 > 0$. Then, $$\mathbf{V} = \text{diag}(1/w_1, 1/w_2, \dots, 1/w_n)\text{.}$$
Following the same steps of the derivation in the original post, we have
$$\begin{align}
\arg\min_{\boldsymbol \beta}\left(\mathbf{y}-\mathbf{X}\boldsymbol\beta\right)^{T}\mathbf{V}^{-1}\left(\mathbf{y}-\mathbf{X}\boldsymbol\beta\right)&= \arg\min_{\boldsymbol \beta}\sum_{i=1}^{n}(1/w_i)^{-1}(y_i-\mathbf{x}^{T}_i\boldsymbol\beta)^2 \\
&= \arg\min_{\boldsymbol \beta}\sum_{i=1}^{n}w_i(y_i-\mathbf{x}^{T}_i\boldsymbol\beta)^2
\end{align}$$
and $\boldsymbol\beta$ is estimated using $$\hat{\boldsymbol\beta} = (\mathbf{X}^{T}\mathbf{V}^{-1}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{V}^{-1}\mathbf{y}$$
from the GLS assumptions.
20,470 | How to avoid k-means assigning different labels on different run? | In short: no, you cannot simply instruct most K-Means implementations to use the same names for their clusters each time (at least I am not aware of any such) - so you probably need to do this on your own.
The simple reason is that K-Means intentionally distributes cluster centers randomly at the start, so there would not be any semantic meaning in assigning names to clusters at this point. As the clusters are still the same when K-Means converges (only their centers and associated samples changed) this does not change from a semantic point of view either.
What you can do is e.g. automatically order cluster centers after K-Means converged by some metric you define (e.g. their distance to some origin). But be aware that a) K-Means will very likely converge differently on different runs ("local optima" if you would want to call them such) which might change your naming completely, and b) even if you converge very closely each time, small changes might still cause your metric to order cluster centers differently, which in turn would cause e.g. 2 clusters to have their names "switched", or one cluster being ordered far earlier/later than on previous runs, thereby also changing names of many other clusters if you are unlucky. And keep in mind that in case you change e.g. the amount of clusters between runs the results will naturally be quite different, hence common labels will likely not contain useful information again.
Update:
As pointed out by @whuber and @ttnphns in the comments, you can of course also use an automatic matching of clusters of two converged runs of K-Means, using some cluster similarity measure. The general idea is to obtain a pairwise matching of clusters over run A and B, where the distance of all clusters of run A to their counterparts in run B is minimized. This will likely give you better results than individually ordering clusters in most cases. Depending on the number of your clusters, a wide range from exhaustive (brute-force) approaches to search strategies might be appropriate.
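A brute-force version of that matching can be sketched as follows (1-D centers and a small k are assumed for brevity; with many clusters you would use the Hungarian algorithm rather than trying every permutation):

```python
from itertools import permutations

centers_a = [0.1, 5.2, 9.8]   # hypothetical centers from run A
centers_b = [9.7, 0.3, 5.0]   # same clusters found by run B, labels shuffled

# Pick the relabelling of run B that minimizes total center-to-center distance
best = min(
    permutations(range(len(centers_b))),
    key=lambda p: sum(abs(a - centers_b[j]) for a, j in zip(centers_a, p)),
)
print(best)  # (1, 2, 0): run-B label best[i] corresponds to run-A label i
```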
20,471 | How to avoid k-means assigning different labels on different run? | try the random_state=0 parameter:
kmeans = KMeans(n_clusters = 20, random_state=0)
see official Glossary
20,472 | How to avoid k-means assigning different labels on different run? | I have a similar problem and used this advice here:
https://stackoverflow.com/questions/44888415/how-to-set-k-means-clustering-labels-from-highest-to-lowest-with-python
In that case, I have always structured labels based on their values.
I guess this will help you.
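The trick in the linked answer can be sketched like this (1-D centers and the numbers are invented for illustration): renumber labels so that label 0 is always the cluster with the smallest center.

```python
centers = [7.4, 1.2, 3.9]    # hypothetical cluster centers from one run
labels = [0, 1, 1, 2, 0]     # raw labels for five points

# New label = rank of the cluster's center value
order = sorted(range(len(centers)), key=lambda k: centers[k])
remap = {old: new for new, old in enumerate(order)}
stable = [remap[l] for l in labels]
print(stable)  # [2, 0, 0, 1, 2]
```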
| How to avoid k-means assigning different labels on different run?
20,473 | How to avoid k-means assigning different labels on different run? | How about using v-measure? It is a symmetric measure
https://scikit-learn.org/stable/modules/generated/sklearn.metrics.v_measure_score.html
You might also want to read more about homogeneity score and completeness score.
20,474 | Logistic regression with binomial data in Python | The statsmodels package has a glm() function that can be used for such problems. See an example below:
import statsmodels.api as sm

data = sm.datasets.star98.load()  # star98 dataset from the linked notebook; endog = [NABOVE, NBELOW]
glm_binom = sm.GLM(data.endog, data.exog, family=sm.families.Binomial())
More details can be found at the following link. Please note that the binomial family models accept a 2d array with two columns. Each observation is expected to be [success, failure]. In the above example, which I took from the link provided below, data.endog corresponds to a two-dimensional array (Success: NABOVE, Failure: NBELOW).
Relevant documentation: https://www.statsmodels.org/stable/examples/notebooks/generated/glm.html
20,475 | Logistic regression with binomial data in Python | Alternatively, using an R-style formula:
import statsmodels.api as sm
import statsmodels.formula.api as smf

# df is assumed to be a pandas DataFrame with count columns `successes`/`failures`
# and predictor columns X1, X2
mod = smf.glm('successes + failures ~ X1 + X2', family=sm.families.Binomial(), data=df).fit()
mod.summary()
20,476 | Derivation of Normal-Wishart posterior | The likelihood $\times$ prior is
$$|\boldsymbol{\Lambda}|^{N/2} \exp\left\{-\frac{1}{2}\left(\sum_{i=1}^N \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i - N\boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\Lambda} N\boldsymbol{\bar{x}} + N\boldsymbol{\mu}^T \boldsymbol{\Lambda} \boldsymbol{\mu} \right) \right\} \\
\times |\boldsymbol{\Lambda}|^{(\nu_0 - D - 1)/2} \exp\left\{-\frac{1}{2} tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right) \right\} \\
\times |\boldsymbol{\Lambda}|^{1/2} \exp \left\{ -\frac{\kappa_0}{2} \left( \boldsymbol{\mu}^T \boldsymbol{\Lambda} \boldsymbol{\mu} - \boldsymbol{\mu}^T\boldsymbol{\Lambda} \boldsymbol{\mu}_0 - \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu} + \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 \right) \right\}.$$
This can be rewritten as
$$|\boldsymbol{\Lambda}|^{1/2} |\boldsymbol{\Lambda}|^{(\nu_0 + N - D - 1)/2} \\
\times \exp \left\{ -\frac{1}{2} \left( \left(\kappa_0 + N\right) \boldsymbol{\mu}^T \boldsymbol{\Lambda} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}) - (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} \boldsymbol{\mu} \\
+ \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 + \sum_{i=1}^N \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i + tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right) \right) \right\}$$
We can rewrite
$$(\kappa_0 + N) \boldsymbol{\mu}^T \boldsymbol{\Lambda} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}) - (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} \boldsymbol{\mu} \\
+ \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 + \sum_{i=1}^N \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i + tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right)$$
as follows by adding and subtracting a term:
$$(\kappa_0 + N) \boldsymbol{\mu}^T \boldsymbol{\Lambda} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}) - (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} \boldsymbol{\mu} \\
+ \frac{1}{\kappa_0 + N}(\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}) \\
- \frac{1}{\kappa_0 + N}(\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}) \\
+ \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 + \sum_{i=1}^N \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i + tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right).$$
The top two lines now factorise as
$$(\kappa_0 + N) \left( \boldsymbol{\mu} - \frac{\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}}{\kappa_0 + N} \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{\mu} - \frac{\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}}{\kappa_0 + N} \right).$$
Adding and subtracting $N \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}}$, the following:
$$- \frac{1}{\kappa_0 + N}(\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})
+ \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 + \sum_{i=1}^N \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i + tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right)$$
can be rewritten as
$$\sum_{i=1}^N \left( \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i - \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} - \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{x}_i + \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} \right) + N \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} + \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 - \frac{1}{\kappa_0 + N}(\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}) + tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right).$$
The sum term
$$\sum_{i=1}^N \left( \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i - \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} - \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{x}_i + \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} \right)$$
equals
$$\sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right).$$
Now
$$N \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} + \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 - \frac{1}{\kappa_0 + N}(\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})$$
can be expanded as
$$N \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} + \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 - \frac{1}{\kappa_0 + N}(\kappa_0^2 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 + N \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} + N \kappa_0 \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 + N^2 \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}}),$$
which equals
$$\frac{N\kappa_0}{\kappa_0 + N} \left( \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} - \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 - \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} + \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 \right) = \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right).$$
The following two terms are scalars:
$$\sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right), \thinspace \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right).$$
And any scalar is equal to its trace, so
$$tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right) + \sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right) + \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)$$
can be rewritten as
$$tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right) + tr \left( \sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right) \right) + tr \left( \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right) \right).$$
Since $tr(\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}) = tr(\boldsymbol{C} \boldsymbol{A} \boldsymbol{B})$, the above sum equals
$$tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right) + tr \left( \sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right) \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T \boldsymbol{\Lambda} \right) + tr \left( \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right) \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \boldsymbol{\Lambda} \right).$$
Using the fact that $tr(\boldsymbol{A} + \boldsymbol{B}) = tr(\boldsymbol{A}) + tr(\boldsymbol{B})$, we can rewrite the sum as
$$tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} + \sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right) \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T \boldsymbol{\Lambda} + \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right) \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \boldsymbol{\Lambda} \right) = tr \left( \left( \boldsymbol{W}_0^{-1} + \sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right) \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T + \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right) \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \right) \boldsymbol{\Lambda} \right).$$
Putting all of that together, if we let $\boldsymbol{S} = \sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right) \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T$ we have that the likelihood $\times$ prior equals
$$|\boldsymbol{\Lambda}|^{1/2} \exp \left\{-\frac{\kappa_0 + N}{2} \left( \boldsymbol{\mu} - \frac{\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}}{\kappa_0 + N} \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{\mu} - \frac{\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}}{\kappa_0 + N} \right) \right\} \\
\times |\boldsymbol{\Lambda}|^{(\nu_0 + N - D - 1)/2} \exp \left\{ - \frac{1}{2} tr \left( \left( \boldsymbol{W}_0^{-1} + \boldsymbol{S} + \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right) \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \right) \boldsymbol{\Lambda} \right) \right\},$$
as required.
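Reading off the two factors, the posterior is again Normal-Wishart; collecting the updated hyperparameters from the expression above gives
$$\boldsymbol{\mu}_N = \frac{\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}}{\kappa_0 + N}, \qquad \kappa_N = \kappa_0 + N, \qquad \nu_N = \nu_0 + N, \qquad \boldsymbol{W}_N^{-1} = \boldsymbol{W}_0^{-1} + \boldsymbol{S} + \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T.$$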
$$|\boldsymbol{\Lambda}|^{N/2} \exp\left\{-\frac{1}{2}\left(\sum_{i=1}^N \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i - N\boldsymbol{\bar{x}}^T \boldsymbol | Derivation of Normal-Wishart posterior
The likelihood $\times$ prior is
$$|\boldsymbol{\Lambda}|^{N/2} \exp\left\{-\frac{1}{2}\left(\sum_{i=1}^N \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i - N\boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\Lambda} N\boldsymbol{\bar{x}} + N\boldsymbol{\mu}^T \boldsymbol{\Lambda} \boldsymbol{\mu} \right) \right\} \\
\times |\boldsymbol{\Lambda}|^{(\nu_0 - D - 1)/2} \exp\left\{-\frac{1}{2} tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right) \right\} \\
\times |\boldsymbol{\Lambda}|^{1/2} \exp \left\{ -\frac{\kappa_0}{2} \left( \boldsymbol{\mu}^T \boldsymbol{\Lambda} \boldsymbol{\mu} - \boldsymbol{\mu}^T\boldsymbol{\Lambda} \boldsymbol{\mu}_0 - \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu} + \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 \right) \right\}.$$
This can be rewritten as
$$|\boldsymbol{\Lambda}|^{1/2} |\boldsymbol{\Lambda}|^{(\nu_0 + N - D - 1)/2} \\
\times \exp \left\{ -\frac{1}{2} \left( \left(\kappa_0 + N\right) \boldsymbol{\mu}^T \boldsymbol{\Lambda} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}) - (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} \boldsymbol{\mu} \\
+ \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 + \sum_{i=1}^N \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i + tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right) \right) \right\}$$
We can rewrite
$$(\kappa_0 + N) \boldsymbol{\mu}^T \boldsymbol{\Lambda} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}) - (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} \boldsymbol{\mu} \\
+ \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 + \sum_{i=1}^N \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i + tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right)$$
as follows by adding and subtracting a term:
$$(\kappa_0 + N) \boldsymbol{\mu}^T \boldsymbol{\Lambda} \boldsymbol{\mu} - \boldsymbol{\mu}^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}) - (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} \boldsymbol{\mu} \\
+ \frac{1}{\kappa_0 + N}(\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}) \\
- \frac{1}{\kappa_0 + N}(\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}) \\
+ \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 + \sum_{i=1}^N \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i + tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right).$$
The top two lines now factorise as
$$(\kappa_0 + N) \left( \boldsymbol{\mu} - \frac{\kappa_0 \boldsymbol{\mu} + N \boldsymbol{\bar{x}}}{\kappa_0 + N} \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{\mu} - \frac{\kappa_0 \boldsymbol{\mu} + N \boldsymbol{\bar{x}}}{\kappa_0 + N} \right).$$
Adding and subtracting $N \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}}$, the following:
$$- \frac{1}{\kappa_0 + N}(\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})
+ \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 + \sum_{i=1}^N \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i + tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right)$$
can be rewritten as
$$\sum_{i=1}^N \left( \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i - \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} - \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{x}_i + \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} \right) + N \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} + \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 - \frac{1}{\kappa_0 + N}(\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}) + tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right).$$
The sum term
$$\sum_{i=1}^N \left( \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{x}_i - \boldsymbol{x}_i^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} - \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{x}_i + \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} \right)$$
equals
$$\sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right).$$
Now
$$N \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} + \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 - \frac{1}{\kappa_0 + N}(\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})^T \boldsymbol{\Lambda} (\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}})$$
can be expanded as
$$N \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} + \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 - \frac{1}{\kappa_0 + N}(\kappa_0^2 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 + N \kappa_0 \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} + N \kappa_0 \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 + N^2 \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}}),$$
which equals
$$\frac{N\kappa_0}{\kappa_0 + N} \left( \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} - \boldsymbol{\bar{x}}^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 - \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\bar{x}} + \boldsymbol{\mu}_0^T \boldsymbol{\Lambda} \boldsymbol{\mu}_0 \right) = \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right).$$
The following two terms are scalars:
$$\sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right), \thinspace \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right).$$
And any scalar is equal to its trace, so
$$tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right) + \sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right) + \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)$$
can be rewritten as
$$tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right) + tr \left( \sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right) \right) + tr \left( \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right) \right).$$
Since $tr(\boldsymbol{A} \boldsymbol{B} \boldsymbol{C}) = tr(\boldsymbol{C} \boldsymbol{A} \boldsymbol{B})$, the above sum equals
$$tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} \right) + tr \left( \sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right) \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T \boldsymbol{\Lambda} \right) + tr \left( \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right) \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \boldsymbol{\Lambda} \right).$$
Using the fact that $tr(\boldsymbol{A} + \boldsymbol{B}) = tr(\boldsymbol{A}) + tr(\boldsymbol{B})$, we can rewrite the sum as
$$tr \left( \boldsymbol{W}_0^{-1} \boldsymbol{\Lambda} + \sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right) \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T \boldsymbol{\Lambda} + \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right) \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \boldsymbol{\Lambda} \right) = tr \left( \left( \boldsymbol{W}_0^{-1} + \sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right) \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T + \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right) \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \right) \boldsymbol{\Lambda} \right).$$
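The two trace facts used above — cyclic permutation and additivity — can be sanity-checked numerically with random matrices (a quick illustration, not part of the derivation; the dimensions and values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 3
Lam = rng.normal(size=(D, D))     # stands in for Lambda; any D x D matrix works here
vs = rng.normal(size=(5, D))      # rows play the role of (x_i - xbar)

# Left side: a sum of scalar quadratic forms v^T Lam v.
lhs = sum(v @ Lam @ v for v in vs)

# Right side: pull each v v^T inside a single trace, using tr(ABC) = tr(CAB)
# and tr(A + B) = tr(A) + tr(B).
S = sum(np.outer(v, v) for v in vs)
rhs = np.trace(S @ Lam)

assert np.isclose(lhs, rhs)
```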
Putting all of that together, if we let $\boldsymbol{S} = \sum_{i=1}^N \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right) \left( \boldsymbol{x}_i - \boldsymbol{\bar{x}} \right)^T$ we have that the likelihood $\times$ prior equals
$$|\boldsymbol{\Lambda}|^{1/2} \exp \left\{-\frac{\kappa_0 + N}{2} \left( \boldsymbol{\mu} - \frac{\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}}{\kappa_0 + N} \right)^T \boldsymbol{\Lambda} \left( \boldsymbol{\mu} - \frac{\kappa_0 \boldsymbol{\mu}_0 + N \boldsymbol{\bar{x}}}{\kappa_0 + N} \right) \right\} \\
\times |\boldsymbol{\Lambda}|^{(\nu_0 + N - D - 1)/2} \exp \left\{ - \frac{1}{2} tr \left( \left( \boldsymbol{W}_0^{-1} + \boldsymbol{S} + \frac{N\kappa_0}{\kappa_0 + N}\left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right) \left( \boldsymbol{\bar{x}} - \boldsymbol{\mu}_0 \right)^T \right) \boldsymbol{\Lambda} \right) \right\},$$
as required.
20,477 | Derivation of Normal-Wishart posterior | The trace is cyclic, so $tr(ABC) = tr(BCA) = tr(CAB)$. Also the trace distributes over addition, so that $tr(A+B) = tr(A) + tr(B)$. With these facts you should be able to cycle the $\Lambda$ term around to the back of each trace term and combine the trace terms together. The result should look something like $$W'^{-1} = W^{-1} + \sum_{i=1}^N x_i x_i^\intercal + \mu_0\mu_0^\intercal$$
20,478 | Derivation of Normal-Wishart posterior | Your error is in the start of your derivation of the scale matrix W':
$tr(W'^{-1}\Lambda)=\ldots$
should be
$tr(W'^{-1}\Lambda) + \mu'^T\kappa'\Lambda\mu'=\ldots$
In your current solution, you will be missing that final term ($\mu'^T\kappa'\Lambda\mu'$) in the normal-part of the posterior.
$tr(W'^{-1}\Lambda)=\ldots$
should be
$tr(W'^{-1}\Lambda) + \mu'^T\kappa'\Lambda\mu'=\ldots$
In your current solution, you will be | Derivation of Normal-Wishart posterior
Your error is in the start of your derivation of the scale matrix W':
$tr(W'^{-1}\Lambda)=\ldots$
should be
$tr(W'^{-1}\Lambda) + \mu'^T\kappa'\Lambda\mu'=\ldots$
In your current solution, you will be missing that final term ($\mu'^T\kappa'\Lambda\mu'$) in the normal-part of the posterior. | Derivation of Normal-Wishart posterior
Your error is in the start of your derivation of the scale matrix W':
$tr(W'^{-1}\Lambda)=\ldots$
should be
$tr(W'^{-1}\Lambda) + \mu'^T\kappa'\Lambda\mu'=\ldots$
In your current solution, you will be |
20,479 | How to compare two ranking algorithms? | Discounted Cumulative Gain (DCG) is one of the most popular metrics for evaluating the quality of a ranking produced by a search engine. In information retrieval, it is often used to measure the effectiveness of web search engines.
It is based on the following assumptions:
Highly relevant documents are more useful if appearing earlier in a search result.
Highly relevant documents are more useful than marginally relevant documents which are better than non-relevant documents.
The formula for DCG goes as follows:
$$DCG_p = \sum_{i=1}^p \frac {rel_i} {log_2 (i+1)} = rel_1 + \sum_{i=2}^p \frac {rel_i} {log_2 (i+1)} \tag{1}$$
Where:
i is the returned position of a document in the search result.
$rel_i$ is the graded relevance of the document at position i.
the summation runs over the p results returned, so the accumulated gain measures the quality of the whole returned list.
DCG is derived from CG (Cumulative Gain), given by:
$$CG_p = \sum_{i=1}^p rel_i \tag{2}$$
From (2) it can be seen that $CG_p$ does not change when the order of the results changes; DCG was introduced to overcome this issue. There is an alternative form of DCG, which places a stronger emphasis on retrieving highly relevant documents. This version of DCG is given by:
$$DCG_p = \sum_{i=1}^p \frac {2^{rel_i} - 1} {log_2 (i+1)} \tag{3}$$
One obvious drawback of the DCG equations presented in (1) and (3) is that algorithms returning different numbers of results cannot be compared effectively, because $DCG_p$ grows as $p$ grows.
To overcome this issue, normalized DCG (nDCG) is proposed. It is given by,
$$nDCG_p = \frac {DCG_p} {IDCG_p}$$
where $IDCG_p$ is the Ideal $DCG_p$, given by,
$$IDCG_p = \sum_{i=1}^{|REL|} \frac {2^{rel_i} - 1} {log_2 (i+1)}$$
where $|REL|$ denotes the list of relevant documents in the corpus, ordered by relevance, up to position p.
For a perfect ranking algorithm,
$$DCG_p = IDCG_p$$
Since the values of nDCG are scaled within the range [0,1], the cross-query comparison is possible using these metrics.
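A minimal sketch of these formulas, using the exponential form (3) for both DCG and IDCG (the relevance grades below are made-up example values):

```python
import math

def dcg(rels):
    # form (3): (2^rel - 1) / log2(i + 1), with positions i = 1..p
    return sum((2 ** r - 1) / math.log2(i + 1) for i, r in enumerate(rels, start=1))

def ndcg(rels):
    ideal = dcg(sorted(rels, reverse=True))   # IDCG: the same grades, ideally ordered
    return dcg(rels) / ideal if ideal > 0 else 0.0

ranking = [3, 2, 3, 0, 1, 2]    # graded relevance of returned documents, in rank order
print(round(ndcg(ranking), 4))  # 0.9488 — and exactly 1.0 for a perfectly ordered list
```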
Drawbacks:
1. nDCG does not penalize the retrieval of bad documents in the result. This is fixable by adjusting the values of relevance attributed to documents.
2. nDCG does not penalize missing documents. This can be fixed by fixing the retrieval size and using the minimum score for the missing documents.
Refer to this for example calculations of nDCG.
Reference
20,480 | How to compare two ranking algorithms? | Useful Resources:
http://www.cs.utexas.edu/~mooney/ir-course/slides/Evaluation.ppt
http://www.nii.ac.jp/TechReports/05-014E.pdf
http://www.stanford.edu/class/cs276/handouts/EvaluationNew-handout-6-per.pdf
http://hal.archives-ouvertes.fr/docs/00/72/67/60/PDF/07-busa-fekete.pdf
Learning to Rank for Information Retrieval (Tie-Yan Liu)
20,481 | How does the Gower distance calculate the difference between binary variables'? | How about binary attributes that have the values "m" and "f", for "male" and "female"?
You do realize that for a dichotomous variable all you can get out is "same" or "different"? The key difference between distances is not whether the value is 1 or 0, but how multiple variables are combined.
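A minimal sketch of this point (illustrative only, not the implementation from any particular package; the feature values and range below are made up): for a dichotomous attribute, the per-variable contribution records only "same" vs. "different", so coding sex as "m"/"f" behaves exactly like coding it as 0/1.

```python
def gower_distance(a, b, ranges):
    """Gower-style dissimilarity: mean of per-feature contributions.
    ranges[j] is the observed range of numeric feature j, or None for a
    categorical/binary feature (where only same-vs-different matters)."""
    total = 0.0
    for x, y, r in zip(a, b, ranges):
        if r is None:
            total += 0.0 if x == y else 1.0   # binary/categorical: 0 or 1
        else:
            total += abs(x - y) / r           # numeric: range-scaled Manhattan
    return total / len(ranges)

# "m"/"f" coding contributes exactly like 0/1 coding would:
d_mf = gower_distance(("m", 10.0), ("f", 15.0), (None, 20.0))
d_01 = gower_distance((0, 10.0), (1, 15.0), (None, 20.0))
assert d_mf == d_01 == 0.625
```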
20,482 | How does the Gower distance calculate the difference between binary variables'? | Gower distance uses Manhattan for calculating distance between continuous datapoints and Dice for calculating distance between categorical datapoints
20,483 | t.test returns an error "data are essentially constant" | As covered in comments, the issue was that the differences were all 2 (or -2, depending on which way around you write the pairs).
Responding to the question in comments:
So this means that as far as statistics go, there's no need for fancy t.test and its a certainty that for each subject there would be a -2 reduction in the fu compared to the bl?
Well, that depends.
If the distribution of differences really was normal, that would be the conclusion, but it might be that the normality assumption is wrong and the distribution of differences in measurements is actually discrete (maybe in the population you wish to make inference about it's usually -2 but occasionally different from -2).
In fact, seeing that all the numbers are integers, it seems like discreteness is probably the case.
... in which case there's no such certainty that all differences will be -2 in the population -- it's more that there's a lack of evidence in the sample of a difference in the population means any different from -2.
(For example, if 87% of the population differences were -2, there's only a 50-50 chance that any of the 5 sample differences would be anything other than -2. So the sample is quite consistent with there being variation from -2 in the population)
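The 50-50 figure in the parenthetical checks out numerically:

```python
p_all_minus2 = 0.87 ** 5          # P(all 5 iid sample differences equal -2)
p_any_other = 1 - p_all_minus2    # P(at least one difference is not -2)
print(round(p_all_minus2, 3), round(p_any_other, 3))  # 0.498 0.502 — about 50-50
```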
But you would also be led to question the suitability of the assumptions for the t-test -- especially in such a small sample.
Responding to the question in comments:
So this means that as far as | t.test returns an error "data are essentially constant"
As covered in comments, the issue was that the differences were all 2 (or -2, depending on which way around you write the pairs).
Responding to the question in comments:
So this means that as far as statistics go, there's no need for fancy t.test and its a certainty that for each subject there would be a -2 reduction in the fu compared to the bl?
Well, that depends.
If the distribution of differences really was normal, that would be the conclusion, but it might be that the normality assumption is wrong and the distribution of differences in measurements is actually discrete (maybe in the population you wish to make inference about it's usually -2 but occasionally different from -2).
In fact, seeing that all the numbers are integers, it seems like discreteness is probably the case.
... in which case there's no such certainty that all differences will be -2 in the population -- it's more that there's a lack of evidence in the sample of a difference in the population means any different from -2.
(For example, if 87% of the population differences were -2, there's only a 50-50 chance that any of the 5 sample differences would be anything other than -2. So the sample is quite consistent with there being variation from -2 in the population)
But you would also be led to question the suitability of the assumptions for the t-test -- especially in such a small sample. | t.test returns an error "data are essentially constant"
As covered in comments, the issue was that the differences were all 2 (or -2, depending on which way around you write the pairs).
Responding to the question in comments:
So this means that as far as |
20,484 | Assign weights to variables in cluster analysis | One way to assign a weight to a variable is by changing its scale. The trick works for the clustering algorithms you mention, viz. k-means, weighted-average linkage and average-linkage.
Kaufman, Leonard, and Peter J. Rousseeuw. "Finding groups in data: An introduction to cluster analysis." (2005) - page 11:
The choice of measurement units gives rise to relative weights of the
variables. Expressing a variable in smaller units will lead to a
larger range for that variable, which will then have a large effect on
the resulting structure. On the other hand, by standardizing one
attempts to give all variables an equal weight, in the hope of
achieving objectivity. As such, it may be used by a practitioner who
possesses no prior knowledge. However, it may well be that some
variables are intrinsically more important than others in a particular
application, and then the assignment of weights should be based on
subject-matter knowledge (see, e.g., Abrahamowicz, 1985).
On the other hand, there have been attempts to devise clustering
techniques that are independent of the scale of the variables
(Friedman and Rubin, 1967). The proposal of Hardy and Rasson (1982) is
to search for a partition that minimizes the total volume of the
convex hulls of the clusters. In principle such a method is invariant
with respect to linear transformations of the data, but unfortunately
no algorithm exists for its implementation (except for an
approximation that is restricted to two dimensions). Therefore, the
dilemma of standardization appears unavoidable at present and the
programs described in this book leave the choice up to the user
Abrahamowicz, M. (1985), The use of non-numerical a priori information for
measuring dissimilarities, paper presented at the Fourth European Meeting of
the Psychometric Society and the Classification Societies, 2-5 July, Cambridge
(UK).
Friedman, H. P., and Rubin, J. (1967), On some invariant criteria for grouping data.
J. Amer. Statist. Assoc., 62, 1159-1178.
Hardy, A., and Rasson, J. P. (1982), Une nouvelle approche des problemes de
classification automatique, Statist. Anal. Données, 7, 41-56.
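The rescaling trick described above can be made concrete: multiplying a (standardized) variable by a weight $w_j$ multiplies its contribution to the squared Euclidean distance — which is all that k-means and average linkage ever see — by $w_j^2$. A minimal sketch with made-up values:

```python
def sq_euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

a, b = (1.0, 2.0), (3.0, 5.0)
w = (2.0, 1.0)                    # up-weight feature 0 by rescaling it

aw = tuple(x * wi for x, wi in zip(a, w))
bw = tuple(x * wi for x, wi in zip(b, w))

# Rescaling feature j by w_j scales its squared-distance contribution by w_j**2.
assert sq_euclidean(aw, bw) == sum(
    wi ** 2 * (x - y) ** 2 for x, y, wi in zip(a, b, w)
)
```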
Kaufman, Leon | Assign weights to variables in cluster analysis
One way to assign a weight to a variable is by changing its scale. The trick works for the clustering algorithms you mention, viz. k-means, weighted-average linkage and average-linkage.
Kaufman, Leonard, and Peter J. Rousseeuw. "Finding groups in data: An introduction to cluster analysis." (2005) - page 11:
The choice of measurement units gives rise to relative weights of the
variables. Expressing a variable in smaller units will lead to a
larger range for that variable, which will then have a large effect on
the resulting structure. On the other hand, by standardizing one
attempts to give all variables an equal weight, in the hope of
achieving objectivity. As such, it may be used by a practitioner who
possesses no prior knowledge. However, it may well be that some
variables are intrinsically more important than others in a particular
application, and then the assignment of weights should be based on
subject-matter knowledge (see, e.g., Abrahamowicz, 1985).
On the other hand, there have been attempts to devise clustering
techniques that are independent of the scale of the variables
(Friedman and Rubin, 1967). The proposal of Hardy and Rasson (1982) is
to search for a partition that minimizes the total volume of the
convex hulls of the clusters. In principle such a method is invariant
with respect to linear transformations of the data, but unfortunately
no algorithm exists for its implementation (except for an
approximation that is restricted to two dimensions). Therefore, the
dilemma of standardization appears unavoidable at present and the
programs described in this book leave the choice up to the user
Abrahamowicz, M. (1985), The use of non-numerical a pnon information for
measuring dissimilarities, paper presented at the Fourth European Meeting of
the Psychometric Society and the Classification Societies, 2-5 July, Cambridge
(UK).
Friedman, H. P., and Rubin, J. (1967), On some invariant criteria for grouping data.
J . Amer. Statist. ASSOC6.,2 , 1159-1178.
Hardy, A., and Rasson, J. P. (1982), Une nouvelle approche des problemes de
classification automatique, Statist. Anal. Donnies, 7, 41-56. | Assign weights to variables in cluster analysis
One way to assign a weight to a variable is by changing its scale. The trick works for the clustering algorithms you mention, viz. k-means, weighted-average linkage and average-linkage.
Kaufman, Leon |
20,485 | Variance of sample mean of bootstrap sample | The correct answer is $\frac{n-1}{n^2}S^2$. The solution is #4 here
20,486 | Variance of sample mean of bootstrap sample | This may be a late answer, but what is wrong in your calculation is the following: you have assumed that unconditionally your bootstrap sample is iid. This is false: conditional on your sample, the bootstrap sample is indeed iid, but unconditionally you lose independence (but you still have identically distributed random variables). This is essentially Exercise 13 in Larry Wasserman's All of Nonparametric Statistics.
20,487 | Variance of sample mean of bootstrap sample | For anyone in the future finding this question: the 2nd variance value computed from the conditional variance formula (at the bottom of the question) is correct. The first value is incorrect.
The answer above that says "The correct answer is" shows the value of the conditional variance $Var(\bar{X^*_n}|X_1,\dots,X_n)=\frac{n-1}{n^2}S^2$. The unconditional variance is $Var(\bar{X^*_n}) = \frac{(2n-1)\sigma^2}{n^2} = \frac{\sigma^2}{n}\left(2 - \frac{1}{n}\right)$. This can be directly read from the linked source pdf, but wasn't copied correctly to this page.
Indeed, the mistake is that the $X_i^*$ are not independent (only conditionally independent), so the first computation of $Var(\bar{X^*_n})$ is incorrect: $Var(\bar{X^*_n}) \neq \frac{1}{n^2}\sum_{i=1}^n Var(X_i^*)$.
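A Monte Carlo sketch of this distinction (not from the original thread): each replication draws a fresh sample and a single bootstrap resample from it, so the variance of the recorded bootstrap means estimates the unconditional variance — which matches $(2n-1)\sigma^2/n^2$ rather than $\sigma^2/n$.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma2, reps = 10, 1.0, 100_000

# reps independent samples of size n, and one bootstrap resample from each
x = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
idx = rng.integers(0, n, size=(reps, n))   # resample indices, with replacement
boot_means = np.take_along_axis(x, idx, axis=1).mean(axis=1)

theory = (2 * n - 1) * sigma2 / n**2       # unconditional variance: 0.19 for n = 10
print(boot_means.var(), theory)            # the two agree to about two decimal places
```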
20,488 | Binary Models (Probit and Logit) with a Logarithmic Offset | You can always include an offset in any GLM: it's just a predictor variable whose coefficient is fixed at 1. Poisson regression just happens to be a very common use case.
Note that in a binomial model, the analogue to log-exposure as an offset is just the binomial denominator, so there's usually no need to specify it explicitly. Just as you can model a Poisson RV as a count with log-exposure as an offset, or as a ratio with exposure as a weight, you can similarly model a binomial RV as counts of successes and failures, or as a frequency with trials as a weight.
In a logistic regression, you would interpret a $\log Z$ offset in terms of the odds ratios: a proportional change in $Z$ results in a given proportional change in $p/(1-p)$.
$$\begin{equation}\begin{split}
\log (p/(1-p)) &= \beta' \mathrm{X} + \log Z \\
p/(1-p) &= Z \exp(\beta' \mathrm{X})
\end{split}\end{equation}$$
But this doesn't have any particular significance like log-exposure does in a Poisson regression. That said, if your binomial probability is small enough, a logistic model will approach a Poisson model with log link (since the denominator on the LHS approaches 1) and the offset can be treated as a log-exposure term.
(The problem described in your linked R question was rather idiosyncratic.)
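A small numeric sketch of the odds-ratio reading above (nothing here is tied to a particular fitting routine — in R's glm or statsmodels' GLM the $\log Z$ term would be passed as the offset argument; the value of $\beta' \mathrm{X}$ below is made up): with the offset in the linear predictor, multiplying $Z$ by a factor multiplies the odds by the same factor.

```python
import math

def p_with_offset(xbeta, z):
    eta = xbeta + math.log(z)         # linear predictor plus the log-Z offset
    return 1 / (1 + math.exp(-eta))   # inverse logit

def odds(p):
    return p / (1 - p)

xbeta = -1.3                          # made-up value of beta'X
# Doubling Z doubles the odds, leaving exp(beta'X) untouched:
assert math.isclose(odds(p_with_offset(xbeta, 2.0)),
                    2 * odds(p_with_offset(xbeta, 1.0)))
```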
20,489 | Binary Models (Probit and Logit) with a Logarithmic Offset | Recasting this as a time-to-event problem, wouldn't a logistic model with a ln(time) offset effectively commit you to a parametric survival function that may or may not fit the data well?
$p/(1-p)=Z*\exp(\text{xbeta})$
$p = [Z*\exp(\text{xbeta})]/[1+Z*\exp(\text{xbeta})]$
Predicted survival at time $Z = 1-[Z*\exp(\text{xbeta})]/[1+Z*\exp(\text{xbeta})]$
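To make the implied commitment concrete, here is a small Python sketch (illustrative `xbeta`, not a fitted value): the ln(time) offset pins the survival function to this one specific monotone shape, whether or not the data follow it.

```python
import math

def implied_survival(Z, xbeta):
    # S(Z) = 1 - Z*exp(xbeta) / (1 + Z*exp(xbeta))
    w = Z * math.exp(xbeta)
    return 1.0 - w / (1.0 + w)

times = [0.0, 0.5, 1.0, 2.0, 5.0]
surv = [implied_survival(t, xbeta=0.3) for t in times]
# S(0) = 1 and S is strictly decreasing in time -- one fixed parametric form
```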
20,490 | When to include a random effect in a model | As I understand, you have a simple nested observational design (plots within patches) and your interest is in a correlation/regression between two continuous variables (the two indices). Your sample size is m patches x n plots = N pairs of observations (or the appropriate sum if unbalanced). No proper randomization was involved, but maybe you can/should/want to consider that (1) the patches were "randomly" selected from all the patches of this kind or in some area, and then (2) the plots were "randomly" selected within each patch.
If you ignore the random factor Patch, you may be pseudoreplicating by considering that you have randomly selected N plots "freely", without constraining them to be (in number or type) in those (previously) selected patches.
So, your first question: yes, that is what a random factor allows. The validity of such inference depends on the validity of the assumption that haphazard selection is equivalent to random selection of patches (e.g., that your results would not be different if a different set of forest patches was selected). That puts a limit also on your space of inference: the kind of forest or geographical area up to which your results extend depends on the maximal (imaginary) population of patches from where your sample is a credible "random" sample. Maybe your observations are a "reasonable random" sample of the mammals of the forest patches in your region but would be a suspiciously aggregated sample of the mammals of the whole continent.
The second one: the test will depend on "the degree of pseudoreplication", or the evidence in your sample that plots "belong" to patches. That is, how much variation there is among patches and among plots within patches (search for intraclass correlation). At one extreme, only variation among patches is present (plots within a patch are all the same) and you have "pure pseudoreplication": your N should be the number of patches, and sampling one or many plots from each of them does not provide new information. At the other extreme, all variation happens between plots, and there is no extra variation explained by knowing which forest patch each plot belongs to (and then the model without the random factor would appear more parsimonious); you have "independent" plots. NEITHER of the extremes is very likely to happen... particularly for biological variables observed on the ground, if only because of spatial autocorrelation and the geographical distributions of the mammals. I personally prefer to keep factors by design anyway (e.g., even when patch is not a relevant source of variation IN THIS SAMPLE) to sustain the "experimental-observational" analogy explained above; remember: not having evidence in your sample to reject the null hypothesis that variation among patches is zero does not mean that variation is really zero in the population.
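The "degree of pseudoreplication" argument can be made concrete with the intraclass correlation, the share of total variance sitting at the patch level. A minimal sketch (hypothetical variance components, chosen only to illustrate the two extremes):

```python
def icc(var_patch, var_plot):
    # intraclass correlation: share of total variance at the patch level
    return var_patch / (var_patch + var_plot)

assert icc(4.0, 0.0) == 1.0   # pure pseudoreplication: N = number of patches
assert icc(0.0, 4.0) == 0.0   # fully independent plots
middle = icc(1.0, 3.0)        # realistic field data falls in between
```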
20,491 | When to include a random effect in a model | Random effects induce heteroskedasticity and correlation in "error terms" in the model
A useful way to look at random effects models is to use some mathematical manipulation to mash them back onto the form of a traditional model without a random effect term. This can be accomplished by absorbing the random effects terms into the error terms of a traditional fixed effect model. We can then examine the properties of the new error term in the traditional fixed effect model form, to see what happens with this variation.
To illustrate this technique, consider a Gaussian linear random effects model with observations $Y_{i,j}$ taken over categories $j=1,...,k$. The model can be written as:
$$Y_{i,j} = \beta_0 + \beta_j + u_j + \varepsilon_{i,j}
\quad \quad \quad
u_j \sim \text{N}(0,\sigma_j^2)
\quad \quad \quad
\varepsilon_{i,j} \sim \text{N}(0,\sigma^2),$$
where the random effects terms and error terms are mutually independent. Since both the error terms and the random effect terms are random variables in the model, we can combine them into a single alternative error term and write the model in a form that does not show a separate random effect. Specifically, we write the model in the traditional fixed effect form:
$$Y_{i,j} = \beta_0 + \beta_j + \eta_{i,j},$$
where we have defined the quantities:
$$\eta_{i,j} = u_j + \varepsilon_{i,j}
\quad \quad \quad \quad \quad
\rho_j = \frac{\sigma_j^2}{\sigma_j^2+\sigma^2}
\quad \quad \quad \quad \quad
\sigma_{j*}^2 = \sigma_j^2+\sigma^2.$$
In this latter form, the error terms $\eta_{i,j}$ are still (jointly) normally distributed with zero mean, but they are no longer homoskedastic and uncorrelated --- they have covariance values:
$$\mathbb{C}(\eta_{i,j},\eta_{i',j'})
= \mathbb{C}(u_j + \varepsilon_{i,j}, u_{j'} + \varepsilon_{i',j'})
= \begin{cases}
\sigma_{j*}^2 & & \text{if } i = i' \text{ and } j = j', \\[6pt]
\rho_j \sigma_{j*}^2 & & \text{if } i \neq i' \text{ and } j = j', \\[6pt]
0 & & \text{otherwise}. \\[6pt]
\end{cases}$$
This form means that there is heteroskedasticity in the model (i.e., with variance $\sigma_{j*}^2$ for observations in category $j$) and the errors within a group are positively correlated (with correlation coefficient $\rho_{j}$).
Should you use a random effects model? As you can see from the above, a Gaussian linear random effects model using categorical predictors is equivalent to a traditional Gaussian linear regression model using categorical predictors, where the latter has heteroskedasticity across the categories of observations and positively correlated errors within each category. Consequently, your choice of whether or not to include random effects terms can be framed equivalently as a choice of whether or not to generalise the behaviour of the error terms in the model to allow heteroskedasticity across categories and correlation of error terms within categories.
In the study you are conducting, your forest patches are the categories and your plots within these forest patches are your individual observations. In this case, including a random effect in your model (taken at the level of the forest patches) is equivalent to allowing heteroskedasticity across different forest patches and also having positive correlation of the mammal abundance of different plots within each forest patch. In order to decide whether or not this is appropriate, you merely need to ask yourself if this type of heteroskedasticity/correlation might plausibly occur in this case.
You have also noted that your forest patches were not randomly sampled. This is not necessarily a problem for the random effects model, and it is no more of a problem than to a fixed effects model. In assessing your sampling method you should consider whether your choice of sites was influenced by any of the variables under study, and consider the types of biases this could induce. However, there is nothing inherent in the random effects model (as opposed to the fixed effects model) that presents an analytical difference here.
I would suggest that in the present case this kind of heteroskedasticity/correlation might plausibly occur, owing to the closeness of the plots within a patch and the possible movement behaviour of the mammals under study. Different plots within the same forest patch might plausibly have positively correlated mammal abundance due to the movement of mammals around a forest patch, breeding of mammals from amongst nearby plots within a forest patch, and common conditions/threats to mammals within the same forest patch. You have noted that you tried both the random effects and fixed effects models and conducted a likelihood-ratio test, finding that there was no significant evidence of the presence of random effects. That is perfectly fine, and it is one way to conduct your analysis --- you have an initially plausible model form and then you find that it does not operate better than a simpler model form.
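The covariance structure derived above can be checked by simulation. A hedged sketch (arbitrary variance components; pairs of plots per patch): the empirical within-patch covariance of the combined errors approaches $\sigma_j^2$, and the total variance approaches $\sigma_j^2+\sigma^2$, so their ratio recovers $\rho_j$.

```python
import random

random.seed(0)
sigma_j, sigma = 1.5, 1.0        # patch-level and plot-level SDs
n_groups = 50000

cov_sum = var_sum = 0.0
for _ in range(n_groups):
    u = random.gauss(0.0, sigma_j)                       # shared patch effect u_j
    e1, e2 = random.gauss(0.0, sigma), random.gauss(0.0, sigma)
    eta1, eta2 = u + e1, u + e2                          # combined errors eta_ij
    cov_sum += eta1 * eta2
    var_sum += eta1 * eta1

cov_within = cov_sum / n_groups   # ~ sigma_j^2            = 2.25
var_total = var_sum / n_groups    # ~ sigma_j^2 + sigma^2  = 3.25
rho = cov_within / var_total      # ~ 2.25 / 3.25
```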
20,492 | Linear mixed-effects modeling with twin study data | You can include twins and non-twins in a unified model by using a dummy variable and including random slopes in that dummy variable. Since all families have at most one set of twins, this will be relatively simple:
Let $A_{ij} = 1$ if sibling $j$ in family $i$ is a twin, and 0 otherwise. I'm assuming you also want the random slope to differ for twins vs. regular siblings - if not, do not include the $ \eta_{i3}$ term in the model below.
Then fit the model:
$$ y_{ij} = \alpha_{0} + \alpha_{1} x_{ij} + \eta_{i0} + \eta_{i1} A_{ij}
+ \eta_{i2} x_{ij} + \eta_{i3} x_{ij} A_{ij} + \varepsilon_{ij} $$
$\alpha_{0}, \alpha_{1}$ are fixed effects, as in your specification
$\eta_{i0}$ is the 'baseline' sibling random effect and $\eta_{i1}$ is the additional random effect that allows twins to be more similar than regular siblings. The sizes of the corresponding random effect variances quantify how similar siblings are and how much more similar twins are than regular siblings. Note that both twin and non-twin correlations are characterized by this model - twin correlations are calculated by summing random effects appropriately (plug in $A_{ij}=1$).
$\eta_{i2}$ and $\eta_{i3}$ have analogous roles, only they act as the random slopes of $x_{ij}$
$\varepsilon_{ij}$ are iid error terms - note that I have written your model slightly differently in terms of random intercepts rather than correlated residual errors.
You can fit the model using the R package lme4. In the code below the dependent variable is y, the dummy variable is A, the predictor is x, the product of the dummy variable and the predictor is Ax and famID is the identifier number for the family. Your data is assumed to be stored in a data frame D, with these variables as columns.
library(lme4)
g <- lmer(y ~ x + (1+A+x+Ax|famID), data=D)
The random effect variables and the fixed effects estimates can be viewed by typing summary(g). Note that this model allows the random effects to be freely correlated with each other.
In many cases, it may make more sense (or be more easily interpretable) to assume independence between the random effects (e.g. this assumption is often made to decompose genetic vs. environmental familial correlation), in which case you'd instead type
g <- lmer(y ~ x + (1|famID) + (A-1|famID) + (x-1|famID) +(Ax-1|famID), data=D)
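As a sanity check on what $\eta_{i1}$ buys you, here is a hedged simulation sketch in Python (arbitrary variance components, pairs evaluated at $x=0$ so only the intercept terms matter): the extra shared effect makes twin pairs more correlated than ordinary sibling pairs.

```python
import random

random.seed(1)

def pair_corr(extra_sd, n=20000, sd0=1.0, sd_eps=0.5):
    # correlation of two siblings' outcomes; extra_sd is the SD of the
    # twin-specific shared effect eta_i1 (0 for ordinary siblings)
    xs, ys = [], []
    for _ in range(n):
        shared = random.gauss(0.0, sd0) + random.gauss(0.0, extra_sd)
        xs.append(shared + random.gauss(0.0, sd_eps))
        ys.append(shared + random.gauss(0.0, sd_eps))
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / (vx * vy) ** 0.5

sib_corr = pair_corr(0.0)    # eta_i1 absent  -> theory: 1 / 1.25 = 0.8
twin_corr = pair_corr(1.5)   # eta_i1 present -> theory: 3.25 / 3.5
```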
20,493 | Proper use and interpretation of zero-inflated gamma models | First, you are not seeing genuine zeros in expression data. Your biologist says that, as all biologists do, but when a biologist says "it's zero" it actually means "it's below my detection threshold, so it doesn't exist." It's a language issue due to the lack of mathematical sophistication in the field. I speak from personal experience here.
The explanation of the zero inflated Gamma in the link you provide is excellent. The physical process leading to your data is, if I understand it, a donor is selected, then treated with a certain peptide, and the response is measured from that donor's cells. There are a couple layers here. One is the overall strength of the donor's response, which feeds into the expression level of each particular cell being measured. If you interpret your Bernoulli variable in the zero inflated Gamma as "donor's response is strong enough to measure", then it might be fine. Just note that in that case you're lumping the noise of the individual cell's expression with the variation between strongly responding donors. Since the noise in expression in a single cell is roughly gamma distributed, that may end up causing too much dispersion in your distribution -- something to check for.
If the additional variation from donors vs cells doesn't screw up your Gamma fit, and you're just trying to get expression vs applied peptide, then there's no reason why this shouldn't be alright.
If more detailed analysis is in order, then I would recommend constructing a custom hierarchical model to match the process leading to your measurements.
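For intuition, the zero-inflated gamma generative model is a simple two-part mixture, easy to sketch (hypothetical parameters): a Bernoulli "response is detectable" indicator times a Gamma expression level.

```python
import random

random.seed(2)

def rzigamma(p_detect, shape, scale):
    # zero-inflated gamma: "zero" with prob (1 - p_detect), else a Gamma draw
    if random.random() < p_detect:
        return random.gammavariate(shape, scale)
    return 0.0

draws = [rzigamma(p_detect=0.7, shape=2.0, scale=1.5) for _ in range(50000)]
n_pos = sum(1 for d in draws if d > 0)
prop_zero = 1 - n_pos / len(draws)   # ~ 0.3
mean_pos = sum(draws) / n_pos        # ~ shape * scale = 3.0
```

A hierarchical variant would let `p_detect` and the Gamma parameters vary by donor.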
20,494 | Proper use and interpretation of zero-inflated gamma models | I have found a solution that I find rather elegant. There is an excellent article in the literature entitled "Analysis of repeated measures data with clumping at zero" which demonstrates a zero-inflated lognormal model for correlated data. The authors provide a SAS macro which is based on PROC NLMIXED and is quite easy to implement. The good news is that this can simplify to cases without clustered observations by omission of the repeated statement in the macro. The bad news is that NLMIXED does not yet have the many correlation structures that we often need, such as autoregressive.
The macro is named MIXCORR, and has a very useful Wiki page that you can find here. The macro itself can be downloaded under section SAS MIXCORR Macro for data with repeated measures and clumping at zero.
I highly recommend all of these links. Hope you find them to be useful.
20,495 | Robust cluster method for mixed data in R | I'd recommend using Gower with subsequent hierarchical clustering. Hierarchical clustering remains the most flexible and appropriate method for a small number of objects (such as 64). If your categorical variable is nominal, Gower will internally recode it into dummy variables and base the Dice similarity (as part of Gower) on them. If your variable is ordinal, you should know that the latest version of the Gower coefficient can accommodate it, too.
As for the numerous indices to determine the "best" number of clusters, most of them exist independently of this or that clustering algorithm. You need not seek clustering packages that necessarily incorporate such indices, because the latter may exist as separate packages. You generate a range of cluster solutions with a clustering package and then compare them by an index from another package.
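For intuition, the Gower dissimilarity itself is simple to sketch in pure Python (hypothetical plot variables; a real analysis would use an implementation such as `daisy` in R's cluster package): range-scaled absolute differences for numeric variables, simple 0/1 mismatch for nominal ones, averaged.

```python
def gower(a, b, ranges, is_num):
    # mean of per-variable dissimilarities: |diff|/range for numeric
    # variables, 0/1 mismatch for nominal ones
    total = 0.0
    for x, y, r, num in zip(a, b, ranges, is_num):
        total += abs(x - y) / r if num else (0.0 if x == y else 1.0)
    return total / len(a)

# two hypothetical sites: (elevation m, canopy cover %, soil type)
is_num = [True, True, False]
ranges = [500.0, 100.0, None]   # observed ranges of the numeric variables
d = gower((120.0, 60.0, "clay"), (370.0, 80.0, "sand"), ranges, is_num)
# (250/500 + 20/100 + 1) / 3
```

The resulting dissimilarity matrix can then be fed to any hierarchical clustering routine.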
I'd recommend you to use Gower with subsequent hierarchical clustering. Hierarchical clustering remains most flexible and appropriate method in case of small number of objects (such as 64). If your categorical variable is nominal, Gower will internally recode it into dummy variables and base dice similarity (as part of Gower) on them. If your variable is ordinal, you should know that latest version on Gower coefficient can accomodate it, too.
As for numerous indices to determine the "best" number of clusters, most of them exist independently of this or that clustering algorithm. You need not to seek for clustering packages that necessarily incorporate such indices because the latter may exist as separate packages. You leave a range of cluster solutions after a clustering package and then compare those by an index from another package. | Robust cluster method for mixed data in R
I'd recommend you to use Gower with subsequent hierarchical clustering. Hierarchical clustering remains most flexible and appropriate method in case of small number of objects (such as 64). If your ca |
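The workflow recommended above (Gower dissimilarity, then hierarchical clustering) would normally be done in R with `daisy()` from the cluster package plus `hclust()`. As a language-neutral illustration only, here is a minimal numpy/scipy sketch with a made-up toy dataset; it handles numeric and nominal columns, not the ordinal extension mentioned in the answer.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def gower_matrix(num, cat):
    """Gower distance: range-scaled absolute difference for numeric columns,
    simple 0/1 mismatch for nominal columns, averaged over all variables."""
    n = num.shape[0]
    ranges = num.max(axis=0) - num.min(axis=0)
    ranges[ranges == 0] = 1.0                      # guard against constant columns
    d = np.zeros((n, n))
    for i in range(n):
        d_num = np.abs(num - num[i]) / ranges      # per-variable, each in [0, 1]
        d_cat = (cat != cat[i]).astype(float)      # 0/1 mismatch for nominal vars
        d[i] = np.hstack([d_num, d_cat]).mean(axis=1)
    return d

# toy mixed data: two numeric variables and one nominal variable (hypothetical)
num = np.array([[1.0, 10.0], [1.2, 11.0], [8.0, 50.0], [8.5, 52.0]])
cat = np.array([["a"], ["a"], ["b"], ["b"]])

d = gower_matrix(num, cat)
# average-linkage hierarchical clustering on the condensed upper triangle
z = linkage(d[np.triu_indices(4, k=1)], method="average")
labels = fcluster(z, t=2, criterion="maxclust")
```

With 64 objects the full dendrogram stays readable, which is part of why hierarchical clustering suits small samples; the separate-index comparison step would then score `fcluster` solutions for several values of `t`.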
20,496 | Comparing logistic coefficients on models with different dependent variables? | The short answer is "yes you can" - but you should compare the Maximum Likelihood Estimates (MLEs) of the "big model" with all covariates from either model, fitted to both.
This is a "quasi-formal" way to get probability theory to answer your question.
In the example, $Y_{1}$ and $Y_{2}$ are the same type of variables (fractions/percentages) so they are comparable. I will assume that you fit the same model to both. So we have two models:
$$M_{1}:Y_{1i}\sim Bin(n_{1i},p_{1i})$$
$$log\left(\frac{p_{1i}}{1-p_{1i}}\right)=\alpha_{1}+\beta_{1}X_{i}$$
$$M_{2}:Y_{2i}\sim Bin(n_{2i},p_{2i})$$
$$log\left(\frac{p_{2i}}{1-p_{2i}}\right)=\alpha_{2}+\beta_{2}X_{i}$$
So you have the hypothesis you want to assess:
$$H_{0}:\beta_{1}>\beta_{2}$$
And you have some data $\{Y_{1i},Y_{2i},X_{i}\}_{i=1}^{n}$, and some prior information (such as the use of logistic model). So you calculate the probability:
$$P=Pr(H_0|\{Y_{1i},Y_{2i},X_{i}\}_{i=1}^{n},I)$$
Now $H_0$ doesn't depend on the actual value of any of the regression parameters, so they must be removed by marginalising.
$$P=\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Pr(H_0,\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}|\{Y_{1i},Y_{2i},X_{i}\}_{i=1}^{n},I) d\alpha_{1}d\alpha_{2}d\beta_{1}d\beta_{2}$$
The hypothesis simply restricts the range of integration, so we have:
$$P=\int_{-\infty}^{\infty} \int_{\beta_{2}}^{\infty} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Pr(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2}|\{Y_{1i},Y_{2i},X_{i}\}_{i=1}^{n},I) d\alpha_{1}d\alpha_{2}d\beta_{1}d\beta_{2}$$
Because the probability is conditional on the data, it will factor into the two separate posteriors for each model
$$Pr(\alpha_{1},\beta_{1}|\{Y_{1i},X_{i},Y_{2i}\}_{i=1}^{n},I)Pr(\alpha_{2},\beta_{2}|\{Y_{2i},X_{i},Y_{1i}\}_{i=1}^{n},I)$$
Now because there are no direct links between $Y_{1i}$ and $\alpha_{2},\beta_{2}$ (only indirect links through $X_{i}$, which is known), $Y_{1i}$ drops out of the conditioning in the second posterior. The same holds for $Y_{2i}$ in the first posterior.
From standard logistic regression theory, and assuming uniform prior probabilities, the posterior for each pair of parameters is approximately bivariate normal with mean equal to the MLEs, and covariance equal to the inverse of the information matrix, denoted by $V_{1}$ and $V_{2}$ - which do not depend on the parameters, only on the MLEs. So you have straightforward normal integrals with known covariance matrices. $\alpha_{j}$ marginalises out with no contribution (as would any other "common variable") and we are left with the usual result (I can post the details of the derivation if you want, but it's pretty "standard" stuff):
$$P=\Phi\left(\frac{\hat{\beta}_{1,MLE}-\hat{\beta}_{2,MLE}}{\sqrt{V_{1:\beta,\beta}+V_{2:\beta,\beta}}}\right)$$
Where $\Phi()$ is just the standard normal CDF. This is the usual comparison-of-normal-means test. But note that this approach requires the use of the same set of regression variables in each model. In the multivariate case with many predictors, if you have different regression variables, the integrals will become effectively equal to the above test, but with the MLEs of the two betas taken from the "big model" which includes all covariates from both models.
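The final normal-comparison step can be illustrated numerically. In this hedged sketch, the data are simulated and `fit_logit` is a hand-rolled Newton-Raphson fit invented for the example (in practice any logistic regression routine supplies the MLEs and their variances); it then evaluates $\Phi$ on the slope difference to approximate $Pr(\beta_1 > \beta_2 \mid \text{data})$.

```python
import numpy as np
from scipy.stats import norm
from scipy.special import expit

def fit_logit(x, y, iters=30):
    """Newton-Raphson MLE for logit(p) = a + b*x.
    Returns the MLE vector and the slope's variance (from the inverse information)."""
    X = np.column_stack([np.ones_like(x), x])
    beta = np.zeros(2)
    for _ in range(iters):
        p = expit(X @ beta)
        info = X.T @ ((p * (1 - p))[:, None] * X)   # observed information matrix
        beta = beta + np.linalg.solve(info, X.T @ (y - p))
    p = expit(X @ beta)
    cov = np.linalg.inv(X.T @ ((p * (1 - p))[:, None] * X))
    return beta, cov[1, 1]

# simulate two outcomes measured on the same predictor, with different true slopes
rng = np.random.default_rng(0)
x = rng.normal(size=800)
y1 = rng.binomial(1, expit(0.2 + 1.0 * x))   # true slope 1.0
y2 = rng.binomial(1, expit(-0.1 + 0.3 * x))  # true slope 0.3

b1, v1 = fit_logit(x, y1)
b2, v2 = fit_logit(x, y2)
# approximate posterior probability that beta_1 exceeds beta_2
P = norm.cdf((b1[1] - b2[1]) / np.sqrt(v1 + v2))
```

Since the simulated slopes differ by 0.7 against a combined standard error near 0.12, `P` comes out close to 1, as the test expects.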
20,497 | Comparing logistic coefficients on models with different dependent variables? | Why not? The models are estimating how much 1 unit of change in any model predictor will influence the probability of "1" for the outcome variable. I'll assume the models are the same, i.e. that they have the same predictors in them. The most informative way to compare the relative magnitudes of any given predictor in the 2 models is to use the models to calculate (either deterministically or, better, by simulation) how much some meaningful increment of change (e.g., +/- 1 SD) in the predictor affects the probabilities of the respective outcome variables, and compare them! You'll want to determine confidence intervals for the two estimates as well, so you can satisfy yourself that the difference is "significant," practically and statistically.
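The simulation approach described above (propagate coefficient uncertainty into a probability-scale effect for a +/- 1 SD change) might be sketched like this; the coefficient estimates and covariance matrix below are made-up stand-ins for one fitted model's output, and the same recipe would be repeated for the second model before comparing.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(1)

# hypothetical output of a fitted logit model: MLEs and their covariance
beta_hat = np.array([-0.4, 0.8])             # intercept, slope (invented values)
cov = np.array([[0.010, 0.002],
                [0.002, 0.015]])

# effect of moving a standardized predictor from -1 SD to +1 SD,
# simulated by drawing coefficient vectors from their approximate posterior
draws = rng.multivariate_normal(beta_hat, cov, size=10_000)
p_lo = expit(draws[:, 0] - draws[:, 1])      # probability at x = -1
p_hi = expit(draws[:, 0] + draws[:, 1])      # probability at x = +1
effect = p_hi - p_lo
estimate = effect.mean()
ci_low, ci_high = np.percentile(effect, [2.5, 97.5])
```

Running the same simulation on both models yields two probability-scale effects with intervals that can be compared directly, which is the point of the answer.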
20,498 | Comparing logistic coefficients on models with different dependent variables? | I assume that by "my independent variable is the economy" you're using shorthand for some specific predictor.
At one level, I see nothing wrong with making a statement such as
X predicts Y1 with an odds ratio of _ and a 95% confidence interval of [ _ , _ ]
while
X predicts Y2 with an odds ratio of _ and a 95% confidence interval of [ _ , _ ].
@dmk38's recent suggestions look very helpful in this regard.
You might also want to standardize the coefficients to facilitate comparison.
At another level, beware of taking inferential statistics (standard errors, p-values, CIs) literally when your sample constitutes a nonrandom sample of the population of years to which you might want to generalize.
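Filling in the blanks of the "odds ratio of _ with CI [ _ , _ ]" statement is mechanical once a fitted coefficient and its standard error are available; the numbers below are invented purely to show the arithmetic, including the standardization mentioned above (rescaling the log-odds coefficient by the predictor's SD).

```python
import math

# hypothetical fitted log-odds coefficient and its standard error
beta_hat, se = 0.45, 0.12

odds_ratio = math.exp(beta_hat)
ci_low = math.exp(beta_hat - 1.96 * se)      # Wald 95% CI, back-transformed
ci_high = math.exp(beta_hat + 1.96 * se)

# a standardized coefficient simply rescales beta_hat by the predictor's SD
sd_x = 2.1                                   # made-up predictor SD
beta_std = beta_hat * sd_x
```

The same two lines computed for the second model give the parallel "X predicts Y2" statement.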
20,499 | Comparing logistic coefficients on models with different dependent variables? | Let us say the interest lies in comparing two groups of people: those with $X_{1} = 1$ and those with $X_{1} = 0$.
The exponential of $\beta_{1}$, the corresponding coefficient, is interpreted as the ratio of the odds of success for those with $X_{1} = 1$ over the odds of success for those with $X_{1} = 0$, conditional on the other variables in the model.
So, if you have two models with different dependent variables, then the interpretation of $\beta_{1}$ changes, since it is not conditioned upon the same set of variables. As a consequence, the comparison is not direct...
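The odds-ratio reading of $e^{\beta_{1}}$ can be checked with a tiny numeric example; the baseline log-odds value here is made up, standing in for "the other variables in the model held fixed".

```python
import math

beta1 = 0.7                                  # coefficient of X1 in a logit model
log_odds_x0 = -0.5                           # linear predictor when X1 = 0 (invented)
log_odds_x1 = log_odds_x0 + beta1            # same covariate values, X1 = 1

odds_x0 = math.exp(log_odds_x0)
odds_x1 = math.exp(log_odds_x1)
ratio = odds_x1 / odds_x0                    # recovers exp(beta1) exactly
```

The ratio is exp(beta1) regardless of the baseline, but only because both odds share the same conditioning set, which is precisely what breaks when the dependent variables (and hence the models) differ.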
20,500 | Best way to put two histograms on same scale? | I think you need to use the same bins. Otherwise the mind plays tricks on you. Normal(0,2) looks more dispersed relative to Normal(0,1) in Image #2 than it does in Image #1. Nothing to do with statistics. It just looks like Normal(0,1) went on a "diet".
-Ralph Winters
Midpoint and histogram end points can also alter perception of the dispersion.
Notice that in this applet a maximum bin selection implies a range of >1.5 to ~5, while a minimum bin selection implies a range of <1 to >5.5.
http://www.stat.sc.edu/~west/javahtml/Histogram.html
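In code, "use the same bins" just means computing one set of edges from the pooled data and reusing it for both histograms. A minimal numpy sketch (plotting omitted; the sample data are invented to mirror the Normal(0,1) vs Normal(0,2) example):

```python
import numpy as np

rng = np.random.default_rng(2)
a = rng.normal(0, 1, 1000)   # Normal(0, 1) sample
b = rng.normal(0, 2, 1000)   # Normal(0, 2) sample

# one set of bin edges spanning both samples, reused for both histograms
edges = np.histogram_bin_edges(np.concatenate([a, b]), bins=30)
count_a, _ = np.histogram(a, bins=edges)
count_b, _ = np.histogram(b, bins=edges)
```

Because both counts sit on identical edges and endpoints, the wider spread of the Normal(0,2) sample is a property of the data, not of the binning.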