7,401 | Proof of convergence of k-means
To add something: whether the algorithm converges also depends on your stopping criterion. If you stop the algorithm once the cluster assignments no longer change, then you can actually prove that the algorithm does not necessarily converge (provided that the cluster assignment does not have a deterministic tie-breaker for the case where multiple centroids are at the same distance).
Here you have 8 data points (dots) and two centroids (red crosses). The green data points have the same distance to both the left and the right centroid, and the same holds for the blue data points. Assume that the assignment function is not deterministic in this case, and that at iteration 1 the green dots get assigned to the left cluster and the blue dots to the right cluster. When we then update the centroids, it turns out that they stay in the same spot. (This is an easy calculation: for the left centroid you average the coordinates of the two left black dots and the two green dots, giving (0, 0.5); likewise for the right centroid.)
At iteration 2 the situation looks the same, but now suppose our (in case of ties) non-deterministic assignment function assigns the green dots to the right cluster and the blue dots to the left cluster. Again the centroids do not change.
Iteration 3 is then identical to iteration 1. Thus we have a case where the cluster assignments change forever and the algorithm (with this stopping criterion) does not converge.
Essentially we only have a guarantee that each step of k-means reduces the cost or keeps it the same (i.e. $\leq$ rather than $\lt$). This is what allowed me to construct a case where the cost stays the same across iterations even though the assignments still change.
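The oscillation described above can be reproduced in a few lines. The coordinates below are an assumed configuration with the same structure as the one described (two "black" anchor points per side, green and blue points exactly equidistant from both centroids), not necessarily the exact figure from the original answer; `kmeans_step` and the per-point tie choices are illustrative.

```python
import numpy as np

# Assumed configuration analogous to the 8-point example above (the exact
# coordinates of the original figure are not given):
X = np.array([
    [-2.0, 0.0], [-2.0, 1.0],    # black, left
    [ 2.0, 0.25], [ 2.0, 0.75],  # black, right
    [ 0.0, 0.0], [ 0.0, 1.0],    # green (tied between both centroids)
    [ 0.0, 0.25], [ 0.0, 0.75],  # blue  (tied between both centroids)
])

def kmeans_step(X, centroids, tie_choice):
    """One Lloyd iteration; exact distance ties are resolved by the
    arbitrary per-point choices in tie_choice (point index -> cluster)."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    for i, c in tie_choice.items():
        if np.isclose(d[i, 0], d[i, 1]):
            labels[i] = c
    new_centroids = np.array([X[labels == j].mean(axis=0) for j in (0, 1)])
    return labels, new_centroids

c0 = np.array([[-1.0, 0.5], [1.0, 0.5]])
# Iteration 1: green -> left cluster, blue -> right cluster.
l1, c1 = kmeans_step(X, c0, {4: 0, 5: 0, 6: 1, 7: 1})
# Iteration 2: the tie-breaker flips, green -> right, blue -> left.
l2, c2 = kmeans_step(X, c1, {4: 1, 5: 1, 6: 0, 7: 0})

print(np.allclose(c0, c1) and np.allclose(c1, c2))  # centroids never move
print((l1 != l2).any())                             # assignments keep flipping
```

The centroids (and hence the cost) are identical across iterations, yet the assignments never stabilize, so the "assignments unchanged" stopping rule never fires.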
7,402 | In survival analysis, why do we use semi-parametric models (Cox proportional hazards) instead of fully parametric models?
If you know the parametric distribution that your data follow, then using a maximum likelihood approach with that distribution makes sense. The real advantage of Cox proportional hazards regression is that you can still fit survival models without knowing (or assuming) the distribution. You give an example using the normal distribution, but most survival times (and other types of data that Cox PH regression is used for) do not come close to following a normal distribution. Some may follow a log-normal, a Weibull, or another parametric distribution, and if you are willing to make that assumption then the maximum likelihood parametric approach is great. But in many real-world cases we do not know what the appropriate distribution is (or even a close enough approximation). With censoring and covariates we cannot just draw a histogram and say "that looks like a ... distribution to me". So it is very useful to have a technique that works well without needing a specific distribution.
Why use the hazard instead of the distribution function? Consider the following statement: "People in group A are twice as likely to die at age 80 as people in group B". Now that could be true because people in group B tend to live longer than those in group A, or it could be because people in group B tend to live shorter lives and most of them are dead long before age 80, giving a very small probability of them dying at 80 while enough people in group A live to 80 that a fair number of them will die at that age giving a much higher probability of death at that age. So the same statement could mean being in group A is better or worse than being in group B. What makes more sense is to say, of those people (in each group) that lived to 80, what proportion will die before they turn 81. That is the hazard (and the hazard is a function of the distribution function/survival function/etc.). The hazard is easier to work with in the semi-parametric model and can then give you information about the distribution.
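The age-80 example can be made concrete with some made-up counts (a sketch, not data from any study): group A is twice as likely, unconditionally, to die at 80, yet once you condition on surviving to 80, group B is the one in far worse shape.

```python
# Illustrative (invented) numbers for the point above.
cohort = 1000

# Group A: many survive to 80, and 100 of them die during that year.
alive_at_80_A, deaths_at_80_A = 500, 100
# Group B: most die young; only 50 reach 80, and all 50 die that year.
alive_at_80_B, deaths_at_80_B = 50, 50

# Unconditional probability of dying at age 80:
p_A = deaths_at_80_A / cohort   # 0.10
p_B = deaths_at_80_B / cohort   # 0.05 -> "A is twice as likely to die at 80"

# Discrete hazard: of those who reached 80, what fraction die before 81?
h_A = deaths_at_80_A / alive_at_80_A  # 0.2
h_B = deaths_at_80_B / alive_at_80_B  # 1.0 -> B is far worse off

print(p_A / p_B, h_A, h_B)
```

Both statements are true of the same numbers; only the hazard version tells you unambiguously which group is better off at 80.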
7,403 | In survival analysis, why do we use semi-parametric models (Cox proportional hazards) instead of fully parametric models?
"We" don't necessarily. Survival analysis tools range from the fully non-parametric, like the Kaplan-Meier method, to fully parametric models where you specify the distribution of the underlying hazard. Each has its advantages and disadvantages.
Semi-parametric methods, like the Cox proportional hazards model, let you get away with not specifying the underlying hazard function. This can be helpful, as we don't always know the underlying hazard function and in many cases don't care. For example, many epidemiology studies want to know "Does exposure X decrease the time until event Y?" What they care about is the difference between patients who have X and those who do not. In that case the underlying hazard doesn't really matter, and the risk of misspecifying it is worse than the consequences of not knowing it.
There are times, however, when this isn't true. I've done work with fully parametric models because the underlying hazard itself was of interest.
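As a concrete instance of the fully non-parametric end of that range, here is a minimal Kaplan-Meier estimator (a sketch; the toy times and censoring flags below are invented for illustration):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate (fully non-parametric).
    times:  follow-up times
    events: 1 = event observed, 0 = right-censored
    Returns the distinct event times and S(t) just after each one."""
    times, events = np.asarray(times, float), np.asarray(events, int)
    t_uniq = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in t_uniq:
        at_risk = np.sum(times >= t)              # still under observation at t
        d = np.sum((times == t) & (events == 1))  # events occurring at t
        s *= 1.0 - d / at_risk
        surv.append(s)
    return t_uniq, np.array(surv)

# Toy data: six subjects, two of them censored (event flag 0).
t, s = kaplan_meier([1, 2, 2, 3, 4, 5], [1, 1, 0, 1, 0, 1])
print(t)  # event times: 1, 2, 3, 5
print(s)  # survival after each: 5/6, 2/3, 4/9, 0
```

No distribution for the hazard is assumed anywhere; the estimate moves only at observed event times, with censored subjects simply leaving the risk set.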
7,404 | Overfitting a logistic regression model
Yes, you can overfit logistic regression models. But first, I'd like to address the point about the AUC (Area Under the Receiver Operating Characteristic curve):
There are no universal rules of thumb with the AUC, ever ever ever.
What the AUC is is the probability that a randomly sampled positive (or case) will have a higher marker value than a randomly sampled negative (or control), because the AUC is mathematically equivalent to the Mann-Whitney U statistic.
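That equivalence is easy to check numerically. The sketch below uses simulated marker values (continuous, so ties have probability zero) and computes the AUC both as the fraction of correctly ordered case/control pairs and via the Mann-Whitney U statistic:

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, 200)   # marker values for cases
neg = rng.normal(0.0, 1.0, 300)   # marker values for controls

# AUC as the probability that a random case outscores a random control
# (ties, if any, would count one half):
diff = pos[:, None] - neg[None, :]
auc_pairs = (diff > 0).mean() + 0.5 * (diff == 0).mean()

# The same number from the Mann-Whitney U statistic:
scores = np.concatenate([pos, neg])
ranks = scores.argsort().argsort() + 1.0   # ranks 1..N (no ties here)
U = ranks[:len(pos)].sum() - len(pos) * (len(pos) + 1) / 2
auc_u = U / (len(pos) * len(neg))

print(np.isclose(auc_pairs, auc_u))  # True: AUC == U / (n1 * n0)
```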
What the AUC is not is a standardized measure of predictive accuracy. Highly deterministic events can have single-predictor AUCs of 95% or higher (such as in controlled mechatronics, robotics, or optics), while some complex multivariable logistic risk prediction models, such as breast cancer risk prediction, have AUCs of 64% or lower, and those are respectably high levels of predictive accuracy for such problems.
A sensible AUC value, as with a power analysis, is prespecified by gathering knowledge of the background and aims of a study a priori. The doctor/engineer describes what they want, and you, the statistician, settle on a target AUC value for your predictive model. Then begins the investigation.
It is indeed possible to overfit a logistic regression model. Aside from linear dependence (if the model matrix is of deficient rank), you can also have perfect concordance, that is, the plot of fitted values against $Y$ perfectly discriminates cases and controls. In that case your parameter estimates have not converged but are diverging toward the boundary of the parameter space (at infinity), where the likelihood approaches its supremum. Sometimes, however, the AUC is 1 by random chance alone.
There's another type of bias that arises from adding too many predictors to the model, and that's small-sample bias. In general, the log odds ratios of a logistic regression model tend toward a biased factor of $2\beta$ because of non-collapsibility of the odds ratio and zero cell counts. In inference, this is handled using conditional logistic regression to control for confounding and precision variables in stratified analyses. In prediction, however, you're out of luck. There is no generalizable prediction when you have $p \gg n \pi(1-\pi)$ (where $\pi = \operatorname{Prob}(Y=1)$), because you're guaranteed to have modeled the "data" and not the "trend" at that point. High-dimensional (large $p$) prediction of binary outcomes is better done with machine learning methods. Understanding linear discriminant analysis, partial least squares, nearest-neighbor prediction, boosting, and random forests would be a very good place to start.
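The perfect-concordance behavior is easy to see on a tiny hand-made separable dataset (a sketch, not any particular solver's implementation): plain gradient ascent on the logistic log-likelihood never settles, and the slope just keeps growing as the likelihood creeps toward its supremum.

```python
import numpy as np

# Perfectly separable 1-D data: any threshold between -1 and 1 separates.
x = np.array([-2.0, -1.0, 1.0, 2.0])   # hypothetical marker values
y = np.array([0.0, 0.0, 1.0, 1.0])

def fit_logistic(x, y, steps, lr=0.5):
    """Plain gradient ascent on the logistic log-likelihood (no penalty)."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w += lr * np.sum((y - p) * x)
        b += lr * np.sum(y - p)
    return w, b

w_short, _ = fit_logistic(x, y, steps=100)
w_long, _ = fit_logistic(x, y, steps=10_000)
print(w_long > w_short > 0)  # True: the slope grows without bound
```

Because the data are symmetric, the intercept stays at zero while the slope's gradient remains strictly positive at every step, so the "estimate" marches off toward infinity rather than converging.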
7,405 | Overfitting a logistic regression model
You can overfit with any method, even if you fit the whole population (if the population is finite).
There are two general solutions to the problem:
1. penalized maximum likelihood estimation (ridge regression, the elastic net, the lasso, etc.), and
2. the use of informative priors with a Bayesian model.
When $Y$ has limited information (e.g. is binary or is categorical but unordered), overfitting is more severe, because whenever you have low information it is like having a smaller sample size. For example, a sample of size 100 from a continuous $Y$ may have the same information as a sample of size 250 from a binary $Y$, for the purposes of statistical power, precision, and overfitting. Binary $Y$ assumes an all-or-nothing phenomenon and has 1 bit of information. Many continuous variables have at least 5 bits of information.
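A sketch of the first solution, using an assumed simulated dataset and a hand-rolled gradient-ascent fitter (illustrative, not a production implementation): adding an L2 (ridge) penalty to the log-likelihood shrinks the coefficients relative to plain maximum likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 30, 20                      # few observations, many predictors
X = rng.normal(size=(n, p))
y = (X[:, 0] + rng.normal(size=n) > 0).astype(float)

def fit_logistic(X, y, lam, steps=3000, lr=0.05):
    """Gradient ascent on the logistic log-likelihood minus an L2
    penalty lam * ||w||^2 / 2; lam = 0 is plain maximum likelihood."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p_hat = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * (X.T @ (y - p_hat) - lam * w)
    return w

w_ml = fit_logistic(X, y, lam=0.0)
w_ridge = fit_logistic(X, y, lam=5.0)
print(np.linalg.norm(w_ridge) < np.linalg.norm(w_ml))  # True: shrinkage
```

The penalty acts exactly like an informative normal prior centered at zero, which is why the two "general solutions" above are close cousins.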
7,406 | Overfitting a logistic regression model
In simple words:
An overfitted logistic regression model has large variance, meaning its decision boundary changes substantially for small changes in the inputs. Consider the following image: the rightmost model is an overfitted logistic model, and its decision boundary has a large number of ups and downs; the middle one is a good fit, with moderate variance and moderate bias; the left one is underfit, with high bias but very low variance.
One more thing: an overfitted regression model tends to have too many features, while an underfit model has very few.
7,407 | Overfitting a logistic regression model
Is there any model, logistic regression aside, that is impossible to overfit?
Overfitting arises fundamentally because you fit to a sample and not to the whole population. Artifacts of your sample can look like features of the population when they are not, and that is why overfitting hurts.
It is akin to a question of external validity: using only the sample, you are trying to get a model that gives the best performance on the real population, which you cannot see.
Sure, some model forms or procedures are more likely to overfit than others, but no model is ever truly immune to overfitting, is it?
Even out-of-sample validation, regularization procedures, etc. can only guard against overfitting; there is no silver bullet. In fact, if one were to estimate one's confidence in making a real-world prediction based on a fitted model, one must always assume that some degree of overfitting has happened.
To what extent may vary, but even a model validated on a hold-out dataset will rarely yield in-the-wild performance matching what was obtained on the hold-out set. And overfitting is a big causative factor.
7,408 | Overfitting a logistic regression model
What we do with the ROC curve to check for overfitting is to split the dataset randomly into training and validation sets and compare the AUC between those groups. If the AUC is "much" bigger in training (there is likewise no rule of thumb for how much), then there might be overfitting.
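A sketch of that check with pure-noise features, so any apparent skill must be overfitting. For brevity a minimum-norm least-squares scorer stands in for "the fitted model" (an assumption of this sketch: with $p > n$ it interpolates the training labels exactly, giving a perfect training AUC by construction).

```python
import numpy as np

rng = np.random.default_rng(2)

def auc(scores, y):
    """AUC as the fraction of (positive, negative) pairs ranked correctly."""
    pos, neg = scores[y == 1], scores[y == 0]
    return (pos[:, None] > neg[None, :]).mean()

# Pure-noise features and labels: there is nothing real to learn.
n, p = 40, 60
X_tr, y_tr = rng.normal(size=(n, p)), rng.integers(0, 2, n)
X_va, y_va = rng.normal(size=(n, p)), rng.integers(0, 2, n)

# Minimum-norm least-squares fit; with p > n it reproduces y_tr exactly.
w = np.linalg.pinv(X_tr) @ y_tr
train_auc = auc(X_tr @ w, y_tr)
val_auc = auc(X_va @ w, y_va)
print(train_auc, round(val_auc, 2))  # training AUC is a perfect 1.0
```

The training AUC is 1.0 while the validation AUC hovers near chance, which is exactly the train-versus-validation gap this check is designed to expose.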
7,409 | Why is a sample covariance matrix singular when sample size is less than number of variables?
Some facts about matrix ranks, offered without proof (proofs of all, or almost all, of them are either given in standard linear algebra texts or set there as exercises after enough information has been given to do them):
If $A$ and $B$ are two conformable matrices, then:
(i) column rank of $A$ = row rank of $A$
(ii) $\text{rank}(A) = \text{rank}(A^T) = \text{rank}(A^TA) = \text{rank}(AA^T)$
(iii) $\text{rank}(AB)\leq \min(\text{rank}(A),\text{rank}(B))$
(iv) $\text{rank}(A+B) \leq \text{rank}(A) + \text{rank}(B)$
(v) if $B$ is a square matrix of full rank, then $\text{rank}(AB) = \text{rank}(A)$
Consider the $n\times p$ matrix of sample data, $y$. From the above, the rank of $y$ is at most $\min(n,p)$.
Further, from the above clearly the rank of $S$ won't be larger than the rank of $y$ (by considering the computation of $S$ in matrix form, with perhaps some simplification).
If $n<p$ then $\text{rank}(y)<p$ in which case $\text{rank}(S)<p$.
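These facts are easy to spot-check numerically (a sketch with random data; `numpy.linalg.matrix_rank` computes the numerical rank):

```python
import numpy as np

rng = np.random.default_rng(3)
rank = np.linalg.matrix_rank

n, p = 5, 8                       # fewer observations than variables
y = rng.normal(size=(n, p))       # raw data matrix

# (ii): rank(A) = rank(A^T) = rank(A^T A) = rank(A A^T)
print(rank(y) == rank(y.T) == rank(y.T @ y) == rank(y @ y.T))  # True

# rank(y) is at most min(n, p)
print(rank(y) <= min(n, p))       # True

# S inherits the bound, so n < p forces rank(S) < p:
yc = y - y.mean(axis=0)           # center each column
S = yc.T @ yc / (n - 1)           # p x p sample covariance
print(rank(S), p)                 # rank(S) < p: S is singular
```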
7,410 | Why is a sample covariance matrix singular when sample size is less than number of variables?
The short answer to your question is that rank$(S) \le n - 1$. So if $p > n$, then $S$ is singular.
For a more detailed answer, recall that the (unbiased) sample covariance matrix can be written as
$$
S = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})(x_i - \bar{x})^T.
$$
Effectively, we are summing $n$ matrices, each of rank 1. Assuming the observations are linearly independent, each observation $x_i$ in some sense contributes 1 to rank$(S)$, but a 1 is subtracted because centering by $\bar{x}$ makes the $n$ terms linearly dependent (the centered observations sum to zero). However, if multicollinearity is present in the observations, then rank$(S)$ may be reduced further, which explains why the rank can be less than $n - 1$.
A large amount of work has gone into studying this problem. For instance, a colleague of mine and I wrote a paper on this same topic, where we were interested in determining how to proceed if $S$ is singular when applied to linear discriminant analysis in the $p \gg n$ setting.
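A numerical sketch of the bound with random data in the $p > n$ setting, building $S$ directly as the sum of $n$ rank-one matrices from the formula above:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 6, 10                      # p > n
x = rng.normal(size=(n, p))
xbar = x.mean(axis=0)

# S as a sum of n rank-one matrices (the unbiased form above):
S = sum(np.outer(xi - xbar, xi - xbar) for xi in x) / (n - 1)

r = np.linalg.matrix_rank(S)
print(r, r <= n - 1)              # rank is n - 1 = 5, so the 10x10 S is singular
```

With generic (linearly independent) observations the rank is exactly $n - 1$: each of the $n$ rank-one terms adds a dimension, minus the one dependence introduced by centering.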
7,411 | Why is a sample covariance matrix singular when sample size is less than number of variables?
When you look at the situation the right way, the conclusion is intuitively obvious and immediate.
This post offers two demonstrations. The first, immediately below, is in words. It is equivalent to a simple drawing, appearing at the very end. In between is an explanation of what the words and the drawing mean.
The covariance matrix for $n$ $p$-variate observations is a $p\times p$ matrix computed by left-multiplying a matrix $\mathbb{X}_{np}$ (the recentered data) by its transpose $\mathbb{X}_{pn}^\prime$. This product of matrices sends vectors through a pipeline of vector spaces whose dimensions are $p$ and $n$. Consequently the covariance matrix, qua linear transformation, sends $\mathbb{R}^p$ into a subspace whose dimension is at most $\min(p,n)$. It is immediate that the rank of the covariance matrix is no greater than $\min(p,n)$. Consequently, if $p\gt n$ then the rank is at most $n$, which, being strictly less than $p$, means the covariance matrix is singular.
All this terminology is fully explained in the remainder of this post.
(As Amoeba kindly pointed out in a now-deleted comment, and shows in an answer to a related question, the image of $\mathbb X$ actually lies in a codimension-one subspace of $\mathbb{R}^n$ (consisting of vectors whose components sum to zero) because its columns have all been recentered at zero. Therefore the rank of the sample covariance matrix $\frac{1}{n-1}\mathbb{X}^\prime \mathbb{X}$ cannot exceed $n-1$.)
Linear algebra is all about tracking dimensions of vector spaces. You only need to appreciate a few fundamental concepts to have a deep intuition for assertions about rank and singularity:
Matrix multiplication represents linear transformations of vectors. An $m\times n$ matrix $\mathbb{M}$ represents a linear transformation from an $n$-dimensional space $V^n$ to an $m$-dimensional space $V^m$. Specifically, it sends any $x\in V^n$ to $\mathbb{M}x = y \in V^m$. That this is a linear transformation follows immediately from the definition of linear transformation and basic arithmetical properties of matrix multiplication.
Linear transformations can never increase dimensions. This means that the image of the entire vector space $V^n$ under the transformation $\mathbb M$ (which is a sub-vector space of $V^m$) can have a dimension no greater than $n$. This is an (easy) theorem that follows from the definition of dimension.
The dimension of any sub-vector space cannot exceed that of the space in which it lies. This is a theorem, but again it is obvious and easy to prove.
The rank of a linear transformation is the dimension of its image. The rank of a matrix is the rank of the linear transformation it represents. These are definitions.
A singular matrix $\mathbb{M}_{mn}$ has rank strictly less than $n$ (the dimension of its domain). In other words, its image has a smaller dimension. This is a definition.
To develop intuition, it helps to see the dimensions. I will therefore write the dimensions of all vectors and matrices immediately after them, as in $\mathbb{M}_{mn}$ and $x_n$. Thus the generic formula
$$y_m = \mathbb{M}_{mn} x_n$$
is intended to mean that the $m\times n$ matrix $\mathbb M$, when applied to the $n$-vector $x$, produces an $m$-vector $y$.
Products of matrices can be thought of as a "pipeline" of linear transformations. Generically, suppose $y_a$ is an $a$-dimensional vector resulting from the successive applications of the linear transformations $\mathbb{M}_{mn}, \mathbb{L}_{lm}, \ldots, \mathbb{B}_{bc},$ and $\mathbb{A}_{ab}$ to the $n$-vector $x_n$ coming from the space $V^n$. This takes the vector $x_n$ successively through a set of vector spaces of dimensions $m, l, \ldots, c, b,$ and finally $a$.
Look for the bottleneck: because dimensions cannot increase (point 2) and subspaces cannot have dimensions larger than the spaces in which they lie (point 3), it follows that the dimension of the image of $V^n$ cannot exceed the smallest dimension $\min(a,b,c,\ldots,l,m,n)$ encountered in the pipeline.
A diagram of the pipeline, $\mathbb{R}^p \xrightarrow{\ \mathbb{X}_{np}\ } \mathbb{R}^n \xrightarrow{\ \mathbb{X}_{pn}^\prime\ } \mathbb{R}^p$, then fully proves the result when it is applied to the product $\mathbb{X}^\prime \mathbb{X}$: the bottleneck dimension is $n$ (in fact $n-1$ after recentering), which is strictly less than $p$.
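The bottleneck argument is easy to check numerically. Here is a sketch (dimensions chosen arbitrarily) showing that the rank of a product of generic matrices equals the smallest dimension encountered in the pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# A pipeline R^7 -> R^5 -> R^2 -> R^6: the middle dimension 2 is the bottleneck.
M = rng.normal(size=(5, 7))
L = rng.normal(size=(2, 5))
A = rng.normal(size=(6, 2))

product = A @ L @ M  # a 6 x 7 matrix
print(np.linalg.matrix_rank(product))  # 2 = min(6, 2, 5, 7)
```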
|
7,412
|
Why is a sample covariance matrix singular when sample size is less than number of variables?
|
The conclusion can be generalized slightly: the sample covariance matrix is singular whenever $n \color{red}{\leq} p$. The key to the following proof, which is also contained in this answer, is to express the sample covariance matrix $S$ as a product of the data matrix $X$ and an idempotent matrix $P$ (which has rank $n - 1$).
In detail, let
\begin{align}
& X = \begin{bmatrix}
x_1^T \\
\vdots \\
x_n^T
\end{bmatrix} \in \mathbb{R}^{n \times p}, \;
e = \begin{bmatrix}
1 \\
\vdots \\
1
\end{bmatrix} \in \mathbb{R}^{n \times 1}, \;
\bar{x} = n^{-1}\sum_{i = 1}^n x_i,
\end{align}
then $\bar{x} = n^{-1}X^Te$, which implies that
\begin{align}
& (n - 1)S = \sum_{i = 1}^n(x_i - \bar{x})(x_i - \bar{x})^T \\
=& \sum_{i = 1}^nx_ix_i^T - n\bar{x}\bar{x}^T \\
=& X^TX - n^{-1}X^Tee^TX \\
=& X^T(I_{(n)} - n^{-1}ee^T)X \in \mathbb{R}^{p \times p}.
\end{align}
Let $P = I_{(n)} - n^{-1}ee^T$, then it is easy to verify that $P^T = P$ and $P^2 = P$, i.e., $P$ is idempotent, whence
\begin{align}
\operatorname{rank}(P) = \operatorname{tr}(P) = \operatorname{tr}(I_{(n)}) -
\operatorname{tr}(n^{-1}ee^T) = n - 1.
\end{align}
It then follows by $\operatorname{rank}(AB) \leq \min(\operatorname{rank}(A),
\operatorname{rank}(B))$ that
\begin{align}
\operatorname{rank}(S) = \operatorname{rank}((n - 1)S) \leq
\operatorname{rank}(P) = n - 1 \leq p - 1 < p.
\end{align}
This shows that $S$ is singular.
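A short numerical sketch of the algebra above (with arbitrary $n \leq p$, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 6, 9

X = rng.normal(size=(n, p))
e = np.ones((n, 1))
P = np.eye(n) - e @ e.T / n      # the centering matrix

assert np.allclose(P, P.T)       # symmetric
assert np.allclose(P @ P, P)     # idempotent, so rank(P) = tr(P)
print(np.trace(P))               # approximately n - 1 = 5

S = X.T @ P @ X / (n - 1)        # the sample covariance via the product form
print(np.linalg.matrix_rank(S))  # 5 = n - 1 < p, so S is singular
```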
|
7,413
|
What is the difference between generalized estimating equations and GLMM?
|
In terms of the interpretation of the coefficients, there is a difference in the binary case (among others). What differs between GEE and GLMM is the target of inference: population-average or subject-specific.
Let's consider a simple made-up example related to yours. You want to model the failure rate between boys and girls in a school. As with most (elementary) schools, the population of students is divided into classrooms. You observe a binary response $Y$ from $n_i$ children in $N$ classrooms (i.e. $\sum_{i=1}^{N}n_{i}$ binary responses clustered by classroom), where $Y_{ij}=1$ if student $j$ from classroom $i$ failed and $Y_{ij}=0$ if he/she passed. And $x_{ij} =1$ if student $j$ from classroom $i$ is male and 0 otherwise.
To bring in the terminology I used in the first paragraph, you can think of the school as being the population and the classrooms being the subjects.
First consider GLMM. GLMM is fitting a mixed-effects model. The model conditions on the fixed design matrix (which in this case is comprised of the intercept and indicator for gender) and any random effects among classrooms that we include in the model. In our example, let's include a random intercept, $b_i$, which will take the baseline differences in failure rate among classrooms into account. So we are modelling
$\log \left(\frac{P(Y_{ij}=1)}{P(Y_{ij}=0)}\mid x_{ij}, b_i\right)=\beta_0+\beta_1 x_{ij} + b_i $
The odds ratio of risk of failure in the above model differs based on the value of $b_i$, which is different among classrooms. Thus the estimates are subject-specific.
GEE, on the other hand, is fitting a marginal model. These model population averages. You're modeling the expectation conditional only on your fixed design matrix.
$\log \left(\frac{P(Y_{ij}=1)}{P(Y_{ij}=0)}\mid x_{ij}\right)=\beta_0+\beta_1 x_{ij} $
This is in contrast to mixed effect models as explained above which condition on both the fixed design matrix and the random effects. So with the marginal model above you're saying, "forget about the difference among classrooms, I just want the population (school-wise) rate of failure and its association with gender." You fit the model and get an odds ratio that is the population-averaged odds ratio of failure associated with gender.
So you may find that your estimates from your GEE model differ from your estimates from your GLMM model, and that is because they are not estimating the same thing.
(As far as converting from log-odds-ratio to odds-ratio by exponentiating, yes, you do that whether it's a population-level or subject-specific estimate.)
Some Notes/Literature:
For the linear case, the population-average and subject-specific estimates are the same.
Zeger, et al. 1988 showed that for logistic regression,
$\beta_M\approx \left[ \left(\frac{16\sqrt{3}}{15\pi }\right)^2 V+1\right]^{-1/2}\beta_{RE}$
where $\beta_M$ are the marginal estimates, $\beta_{RE}$ are the subject-specific estimates and $V$ is the variance of the random effects.
Molenberghs, Verbeke 2005 has an entire chapter on marginal vs. random effects models.
I learned about this and related material in a course based very much off Diggle, Heagerty, Liang, Zeger 2002, a great reference.
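The Zeger et al. approximation quoted above is easy to evaluate directly. A sketch (the function name and inputs are illustrative, not from any package):

```python
import math

def marginal_from_conditional(beta_re, v):
    """Approximate marginal (population-average) log-odds ratio from a
    subject-specific one, per the Zeger et al. (1988) attenuation formula."""
    c = (16 * math.sqrt(3) / (15 * math.pi)) ** 2
    return beta_re / math.sqrt(c * v + 1)

# A subject-specific log-odds ratio of 1.0 with random-intercept variance 4
# attenuates toward zero at the population level:
print(round(marginal_from_conditional(1.0, 4.0), 3))  # 0.648
```

As the random-effect variance $V$ shrinks to zero, the marginal and subject-specific estimates coincide, consistent with the note about the linear case above.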
|
7,414
|
Propensity score matching - What is the problem?
|
It's true that there are not only other ways of performing matching but also ways of adjusting for confounding using just the treatment and potential confounders (e.g., weighting, with or without propensity scores). Here I'll just mention the documented problems with propensity score (PS) matching. Matching, in general, can be a problematic method because it discards units, can change the target estimand, and is nonsmooth, making inference challenging. Using propensity scores to match adds additional problems.
The most famous critique of propensity score matching comes from King and Nielsen (2019). They have three primary arguments: 1) propensity score matching seeks to imitate a randomized experiment instead of a block randomized experiment, the latter of which yields far better precision and control against confounding, 2) propensity score matching induces the "propensity score paradox", where further trimming of the units increases imbalance after a point (not shared by some other matching methods), and 3) effect estimation is more sensitive to model specification after using propensity score matching than other matching methods. I'll discuss these arguments briefly.
Argument (1) is undeniable, but it's possible to improve PS matching by first exact matching on some variables or coarsened versions of them and doing PS matching within strata of the variables or by using the PS just to create a caliper and using a different form of matching (e.g., Mahalanobis distance matching [MDM]) to actually pair units. Though these should be standard methods, researchers typically just apply PS matching without these other beneficial steps. This increases reliance on correct specification of the propensity score model to control confounding since balance is achieved only on average but not exactly or necessarily in various combinations of variables.
Argument (2) is only somewhat tenable. It's true that the PS paradox can occur when the caliper is successively narrowed, excluding more units, but researchers can easily assess whether this is happening with their data and adjust accordingly. If imbalance increases after tightening a caliper, then the caliper can just be relaxed again. In addition, Ripollone et al. (2018) found that while the PS paradox does occur, it doesn't always occur in the typically recommended caliper widths that are most often used by researchers, indicating that the PS paradox is not as problematic for the actual use of PS matching as the paradox would otherwise suggest.
Argument (3) is also only somewhat tenable. King and Nielsen demonstrated that if, after PS matching, you were to use many different models to estimate the treatment effect, the range of possible effect estimates would be much larger than if you were to use a different form of matching (in particular, MDM). The implication is that PS matching doesn't protect against model dependence, which is often touted as its primary benefit. The effect estimate still depends on the outcome model used. The problem with this argument is that researchers typically don't try hundreds of different outcome models after matching; the two most common are no model (i.e., a t-test) or a model involving only main effects for the covariates used in matching. Any other model would be viewed as suspicious, so norms against unusual models already protect against model dependence.
I attempted to replicate King and Nielsen's findings by recreating their data scenario to settle an argument with a colleague (unrelated to the points above; it was about whether it matters whether the covariates included were confounders or mediators). You can see that replication attempt here. Using the same data-generating process, I was able to replicate some of their findings but not all of them. (In the demonstration you can ignore the graphs on the right.)
Other critiques of PS matching are more about their statistical performance. Abadie and Imbens (2016) demonstrate that PS matching is not very precise. De los Angeles Resa and Zubizarreta (2016) find in simulations that PS matching can vastly underperform compared to cardinality matching, which doesn't involve a propensity score. This is because PS matching relies on the theoretical properties of the PS to balance the covariates while cardinality matching uses constraints to require balance, thereby ensuring balance is met in the sample. In almost all scenarios considered, PS matching did worse than cardinality matching. That said, as with many simulation studies, the paper likely wouldn't have been published if PS matching did better, so there may be a selection effect here. Still, it's hard to deny that PS matching is suboptimal.
What should you do? It depends. Matching typically involves a tradeoff among balance, generalizability, and sample size, which correspond to internal validity, external validity, and precision. PS matching optimizes none of them, but it can be modified to sacrifice some to boost another (e.g., using a caliper decreases sample size and hampers generalizability [see my post here for details on that], but often improves balance). If generalizability is less important to you, which is implicitly the case if you were to be using a caliper, then cardinality matching is a good way of maintaining balance and precision. Even better would be overlap weighting (Li et al., 2018), which guarantees exact mean balance and the most precise PS-weighted estimate possible, but uses weighting rather than matching and so is more dependent on correct model specification. In many cases, though, PS matching does just fine, and you can assess whether it is working well in your dataset before you commit to it anyway. If it's not leaving you with good balance (measured broadly) or requires too tight of a caliper to do so, you might consider a different method.
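For concreteness, here is a minimal, hypothetical sketch of the kind of PS matching being critiqued: greedy 1:1 nearest-neighbor matching on an estimated propensity score with a caliper. The data and caliper width are made up, and this is not the algorithm of any particular package:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Hypothetical data: one confounder x drives treatment assignment.
n = 500
x = rng.normal(size=n)
t = rng.binomial(1, 1 / (1 + np.exp(-x)))

# 1) Estimate propensity scores with a logistic model.
ps = LogisticRegression().fit(x.reshape(-1, 1), t).predict_proba(x.reshape(-1, 1))[:, 1]

# 2) Greedy 1:1 nearest-neighbor matching on the PS, with a caliper.
caliper = 0.2 * ps.std()
treated = np.flatnonzero(t == 1)
controls = list(np.flatnonzero(t == 0))
pairs = []
for i in treated:
    if not controls:
        break
    d = np.abs(ps[controls] - ps[i])
    j = int(np.argmin(d))
    if d[j] <= caliper:
        pairs.append((i, controls.pop(j)))  # match without replacement

print(len(pairs), "matched pairs out of", len(treated), "treated units")
```

Note how units outside the caliper are simply discarded, which is exactly the step that changes the target estimand and hampers generalizability.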
Abadie, A., & Imbens, G. W. (2016). Matching on the Estimated Propensity Score. Econometrica, 84(2), 781–807. https://doi.org/10.3982/ECTA11293
de los Angeles Resa, M., & Zubizarreta, J. R. (2016). Evaluation of subset matching methods and forms of covariate balance. Statistics in Medicine, 35(27), 4961–4979. https://doi.org/10.1002/sim.7036
King, G., & Nielsen, R. (2019). Why Propensity Scores Should Not Be Used for Matching. Political Analysis, 1–20. https://doi.org/10.1017/pan.2019.11
Li, F., Morgan, K. L., & Zaslavsky, A. M. (2018). Balancing covariates via propensity score weighting. Journal of the American Statistical Association, 113(521), 390–400. https://doi.org/10.1080/01621459.2016.1260466
Ripollone, J. E., Huybrechts, K. F., Rothman, K. J., Ferguson, R. E., & Franklin, J. M. (2018). Implications of the Propensity Score Matching Paradox in Pharmacoepidemiology. American Journal of Epidemiology, 187(9), 1951–1961. https://doi.org/10.1093/aje/kwy078
|
Propensity score matching - What is the problem?
|
It's true that there are not only other ways of performing matching but also ways of adjusting for confounding using just the treatment and potential confounders (e.g., weighting, with or without prop
|
Propensity score matching - What is the problem?
It's true that there are not only other ways of performing matching but also ways of adjusting for confounding using just the treatment and potential confounders (e.g., weighting, with or without propensity scores). Here I'll just mention the documented problems with propensity score (PS) matching. Matching, in general, can be a problematic method because it discards units, can change the target estimand, and is nonsmooth, making inference challenging. Using propensity scores to match adds additional problems.
The most famous critique of propensity score matching comes from King and Nielsen (2019). They have three primary arguments: 1) propensity score matching seeks to imitate a randomized experiment instead of a block randomized experiment, the latter of which yields far better precision and control against confounding, 2) propensity score matching induces the "propensity score paradox", where further trimming of the units increases imbalance after a point (not shared by some other matching methods), and 3) effect estimation is more sensitive to model specification after using propensity score matching than other matching methods. I'll discuss these arguments briefly.
Argument (1) is undeniable, but it's possible to improve PS matching by first exact matching on some variables or coarsened versions of them and doing PS matching within strata of the variables or by using the PS just to create a caliper and using a different form of matching (e.g., Mahalanobis distance matching [MDM]) to actually pair units. Though these should be standard methods, researchers typically just apply PS matching without these other beneficial steps. This increases reliance on correct specification of the propensity score model to control confounding since balance is achieved only on average but not exactly or necessarily in various combinations of variables.
Argument (2) is only somewhat tenable. It's true that the PS paradox can occur when the caliper is successively narrowed, excluding more units, but researchers can easily assess whether this is happening with their data and adjust accordingly. If imbalance increases after tightening a caliper, then the caliper can just be relaxed again. In addition, Ripollone et al. (2018) found that while the PS paradox does occur, it doesn't always occur in the typically recommended caliper widths that are most often used by researchers, indicating that the PS paradox is not as problematic for the actual use of PS matching as the paradox would otherwise suggest.
Argument (3) is also only somewhat tenable. King and Nielsen demonstrated that if, after PS matching, you were to use many different models to estimate the treatment effect, the range of possible effect estimates would be much larger than if you were to use a different form of matching (in particular, MDM). The implication is that PS matching doesn't protect against model dependence, which is often touted as its primary benefit. The effect estimate still depends on the outcome model used. The problem with this argument is that researchers typically don't try hundreds of different outcome models after matching; the two most common are no model (i.e., a t-test) or a model involving only main effects for the covariates used in matching. Any other model would be viewed as suspicious, so norms against unusual models already protect against model dependence.
I attempted to replicate King and Nielsen's findings by recreating their data scenario to settle an argument with a colleague (unrelated to the points above; it was about whether it matters whether the covariates included were confounders or mediators). You can see that replication attempt here. Using the same data-generating process, I was able to replicate some of their findings but not all of them. (In the demonstration you can ignore the graphs on the right.)
Other critiques of PS matching are more about their statistical performance. Abadie and Imbens (2016) demonstrate that PS matching is not very precise. De los Angeles Resa and Zubizarreta (2016) find in simulations that PS matching can vastly underperform compared to cardinality matching, which doesn't involve a propensity score. This is because PS matching relies on the theoretical properties of the PS to balance the covariates while cardinality matching uses constraints to require balance, thereby ensuring balance is met in the sample. In almost all scenarios considered, PS matching did worse than cardinality matching. That said, as with many simulation studies, the paper likely wouldn't have been published if PS matching did better, so there may be a selection effect here. Still, it's hard to deny that PS matching is suboptimal.
What should you do? It depends. Matching typically involves a tradeoff among balance, generalizability, and sample size, which correspond to internal validity, external validity, and precision. PS matching optimizes none of them, but it can be modified to sacrifice some to boost another (e.g., using a caliper decreases sample size and hampers generalizability [see my post here for details on that], but often improves balance). If generalizability is less important to you, which is implicitly the case if you were to be using a caliper, then cardinality matching is a good way of maintaining balance and precision. Even better would be overlap weighting (Li et al., 2018), which guarantees exact mean balance and the most precise PS-weighted estimate possible, but uses weighting rather than matching and so is more dependent on correct model specification. In many cases, though, PS matching does just fine, and you can assess whether it is working well in your dataset before you commit to it anyway. If it's not leaving you with good balance (measured broadly) or requires too tight of a caliper to do so, you might consider a different method.
Abadie, A., & Imbens, G. W. (2016). Matching on the Estimated Propensity Score. Econometrica, 84(2), 781–807. https://doi.org/10.3982/ECTA11293
de los Angeles Resa, M., & Zubizarreta, J. R. (2016). Evaluation of subset matching methods and forms of covariate balance. Statistics in Medicine, 35(27), 4961–4979. https://doi.org/10.1002/sim.7036
King, G., & Nielsen, R. (2019). Why Propensity Scores Should Not Be Used for Matching. Political Analysis, 1–20. https://doi.org/10.1017/pan.2019.11
Li, F., Morgan, K. L., & Zaslavsky, A. M. (2018). Balancing covariates via propensity score weighting. Journal of the American Statistical Association, 113(521), 390–400. https://doi.org/10.1080/01621459.2016.1260466
Ripollone, J. E., Huybrechts, K. F., Rothman, K. J., Ferguson, R. E., & Franklin, J. M. (2018). Implications of the Propensity Score Matching Paradox in Pharmacoepidemiology. American Journal of Epidemiology, 187(9), 1951–1961. https://doi.org/10.1093/aje/kwy078
|
7,415
|
Propensity score matching - What is the problem?
|
@Noah's answer is superb and qualifies as a mini review article. To me, the severe problems with PS matching are topped off by (1) it does not represent reproducible research in that the choice of the matching algorithm is too much up in the air and most matching algorithms give different results depending on how you sort the dataset, and (2) any method that drops relevant observations constitutes bad statistical practice and is usually highly inefficient from a precision/variance standpoint. Another issue needs to be raised: why use propensity scores at all? I see many researchers using PS when direct covariate adjustment would be far superior, e.g., when there are 100,000 observations and 100 covariates.
|
7,416
|
Propensity score matching - What is the problem?
|
A special case where propensity score matching alone may produce biased estimates is pre/post or difference-in-differences analysis.
When matching on a continuous or integer count outcome variable $Y_{pre}$ in the baseline period (e.g., total healthcare expenditures or number of inpatient visits during the 12-month pre-intervention period), regression to the mean (RTM) bias may be present if the baseline treatment/control distributions are substantially different.
As a corrective measure to avoid RTM bias, conduct an ANCOVA regression in the propensity score matched data including the baseline outcome and treatment indicator as regressors and outcome = $Y_{post}$.
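A minimal sketch of that ANCOVA step on simulated data (the variable names and data-generating process are my own illustration; in practice you would run this on your propensity score matched sample):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# simulated matched sample: treatment indicator and baseline outcome
treat = rng.integers(0, 2, n)
y_pre = rng.normal(10, 2, n)
# post outcome depends on the baseline (the source of RTM) plus a
# true treatment effect of -1.5
y_post = 2 + 0.7 * y_pre - 1.5 * treat + rng.normal(0, 1, n)

# ANCOVA: regress y_post on an intercept, the baseline outcome, and treatment
X = np.column_stack([np.ones(n), y_pre, treat])
coefs, *_ = np.linalg.lstsq(X, y_post, rcond=None)
treatment_effect = coefs[2]   # effect estimate, adjusted for baseline outcome
```

Because $Y_{pre}$ enters as a regressor, the coefficient on the treatment indicator is adjusted for baseline differences rather than inheriting them.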
|
7,417
|
Building an autoencoder in Tensorflow to surpass PCA
|
Here is the key figure from the 2006 Science paper by Hinton and Salakhutdinov:
It shows dimensionality reduction of the MNIST dataset ($28\times 28$ black and white images of single digits) from the original 784 dimensions to two.
Let's try to reproduce it. I will not be using Tensorflow directly, because it's much easier to use Keras (a higher-level library running on top of Tensorflow) for simple deep learning tasks like this. H&S used a $$784\to 1000\to 500\to 250\to 2\to 250\to 500\to 1000\to 784$$ architecture with logistic units, pre-trained with a stack of Restricted Boltzmann Machines. Ten years later, this sounds very old-school. I will use a simpler $$784\to 512\to 128\to 2\to 128\to 512\to 784$$ architecture with exponential linear units and no pre-training. I will use the Adam optimizer (a particular implementation of adaptive stochastic gradient descent with momentum).
The code is copy-pasted from a Jupyter notebook. In Python 3.6 you need to install matplotlib (for pylab), NumPy, seaborn, TensorFlow, and Keras. When running in a Python shell, you may need to add plt.show() to show the plots.
Initialization
%matplotlib notebook
import pylab as plt
import numpy as np
import seaborn as sns; sns.set()
import keras
from keras.datasets import mnist
from keras.models import Sequential, Model
from keras.layers import Dense
from keras.optimizers import Adam
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(60000, 784) / 255
x_test = x_test.reshape(10000, 784) / 255
PCA
mu = x_train.mean(axis=0)
U,s,V = np.linalg.svd(x_train - mu, full_matrices=False)
Zpca = np.dot(x_train - mu, V.transpose())
Rpca = np.dot(Zpca[:,:2], V[:2,:]) + mu # reconstruction
err = np.sum((x_train-Rpca)**2)/Rpca.shape[0]/Rpca.shape[1]
print('PCA reconstruction error with 2 PCs: ' + str(round(err,3)));
This outputs:
PCA reconstruction error with 2 PCs: 0.056
Training the autoencoder
m = Sequential()
m.add(Dense(512, activation='elu', input_shape=(784,)))
m.add(Dense(128, activation='elu'))
m.add(Dense(2, activation='linear', name="bottleneck"))
m.add(Dense(128, activation='elu'))
m.add(Dense(512, activation='elu'))
m.add(Dense(784, activation='sigmoid'))
m.compile(loss='mean_squared_error', optimizer = Adam())
history = m.fit(x_train, x_train, batch_size=128, epochs=5, verbose=1,
validation_data=(x_test, x_test))
encoder = Model(m.input, m.get_layer('bottleneck').output)
Zenc = encoder.predict(x_train) # bottleneck representation
Renc = m.predict(x_train) # reconstruction
This takes ~35 sec on my work desktop and outputs:
Train on 60000 samples, validate on 10000 samples
Epoch 1/5
60000/60000 [==============================] - 7s - loss: 0.0577 - val_loss: 0.0482
Epoch 2/5
60000/60000 [==============================] - 7s - loss: 0.0464 - val_loss: 0.0448
Epoch 3/5
60000/60000 [==============================] - 7s - loss: 0.0438 - val_loss: 0.0430
Epoch 4/5
60000/60000 [==============================] - 7s - loss: 0.0423 - val_loss: 0.0416
Epoch 5/5
60000/60000 [==============================] - 7s - loss: 0.0412 - val_loss: 0.0407
so you can already see that we surpassed PCA loss after only two training epochs.
(By the way, it is instructive to change all activation functions to activation='linear' and to observe how the loss converges precisely to the PCA loss. That is because a linear autoencoder is equivalent to PCA.)
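That equivalence is the Eckart–Young theorem in disguise: no rank-2 linear reconstruction can have lower MSE than the truncated SVD, which is what a fully linear autoencoder would rediscover by gradient descent. A quick numpy sanity check (random data standing in for MNIST):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
X -= X.mean(axis=0)                       # center, as PCA requires

U, s, Vt = np.linalg.svd(X, full_matrices=False)
R_pca = (U[:, :2] * s[:2]) @ Vt[:2, :]    # rank-2 PCA reconstruction
err_pca = np.mean((X - R_pca) ** 2)

# any other rank-2 linear projection (here: onto two random orthonormal
# directions) reconstructs worse than the top-2 SVD
Q, _ = np.linalg.qr(rng.normal(size=(30, 2)))
R_rand = X @ Q @ Q.T
err_rand = np.mean((X - R_rand) ** 2)
```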
Plotting PCA projection side-by-side with the bottleneck representation
plt.figure(figsize=(8,4))
plt.subplot(121)
plt.title('PCA')
plt.scatter(Zpca[:5000,0], Zpca[:5000,1], c=y_train[:5000], s=8, cmap='tab10')
plt.gca().get_xaxis().set_ticklabels([])
plt.gca().get_yaxis().set_ticklabels([])
plt.subplot(122)
plt.title('Autoencoder')
plt.scatter(Zenc[:5000,0], Zenc[:5000,1], c=y_train[:5000], s=8, cmap='tab10')
plt.gca().get_xaxis().set_ticklabels([])
plt.gca().get_yaxis().set_ticklabels([])
plt.tight_layout()
Reconstructions
And now let's look at the reconstructions (first row - original images, second row - PCA, third row - autoencoder):
plt.figure(figsize=(9,3))
toPlot = (x_train, Rpca, Renc)
for i in range(10):
    for j in range(3):
        ax = plt.subplot(3, 10, 10*j+i+1)
        plt.imshow(toPlot[j][i,:].reshape(28,28), interpolation="nearest",
                   vmin=0, vmax=1)
        plt.gray()
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
plt.tight_layout()
One can obtain much better results with a deeper network, some regularization, and longer training. Experiment. Deep learning is easy!
|
7,418
|
Building an autoencoder in Tensorflow to surpass PCA
|
Huge props to @amoeba for making this great example. I just want to show that the auto-encoder training and reconstruction procedure described in that post can be done in R with similar ease. The auto-encoder below is set up to emulate amoeba's example as closely as possible: same optimiser and overall architecture. The exact costs are not reproducible because the TensorFlow back-end is not seeded in the same way.
Initialisation
library(keras)
library(rARPACK) # to use SVDS
rm(list=ls())
mnist = dataset_mnist()
x_train = mnist$train$x
y_train = mnist$train$y
x_test = mnist$test$x
y_test = mnist$test$y
# reshape & rescale
dim(x_train) = c(nrow(x_train), 784)
dim(x_test) = c(nrow(x_test), 784)
x_train = x_train / 255
x_test = x_test / 255
PCA
mus = colMeans(x_train)
x_train_c = sweep(x_train, 2, mus)
x_test_c = sweep(x_test, 2, mus)
digitSVDS = svds(x_train_c, k = 2)
ZpcaTEST = x_test_c %*% digitSVDS$v # PCA projection of test data
Autoencoder
model = keras_model_sequential()
model %>%
layer_dense(units = 512, activation = 'elu', input_shape = c(784)) %>%
layer_dense(units = 128, activation = 'elu') %>%
layer_dense(units = 2, activation = 'linear', name = "bottleneck") %>%
layer_dense(units = 128, activation = 'elu') %>%
layer_dense(units = 512, activation = 'elu') %>%
layer_dense(units = 784, activation='sigmoid')
model %>% compile(
loss = loss_mean_squared_error, optimizer = optimizer_adam())
history = model %>% fit(verbose = 2, validation_data = list(x_test, x_test),
x_train, x_train, epochs = 5, batch_size = 128)
# Unsurprisingly a 3-year old laptop is slower than a desktop
# Train on 60000 samples, validate on 10000 samples
# Epoch 1/5
# - 14s - loss: 0.0570 - val_loss: 0.0488
# Epoch 2/5
# - 15s - loss: 0.0470 - val_loss: 0.0449
# Epoch 3/5
# - 15s - loss: 0.0439 - val_loss: 0.0426
# Epoch 4/5
# - 15s - loss: 0.0421 - val_loss: 0.0413
# Epoch 5/5
# - 14s - loss: 0.0408 - val_loss: 0.0403
# Set the auto-encoder
autoencoder = keras_model(model$input, model$get_layer('bottleneck')$output)
ZencTEST = autoencoder$predict(x_test) # bottleneck representation of test data
Plotting PCA projection side-by-side with the bottleneck representation
par(mfrow=c(1,2))
myCols = colorRampPalette(c('green', 'red', 'blue', 'orange', 'steelblue2',
'darkgreen', 'cyan', 'black', 'grey', 'magenta') )
plot(ZpcaTEST[1:5000,], col= myCols(10)[(y_test+1)],
pch=16, xlab = 'Score 1', ylab = 'Score 2', main = 'PCA' )
legend( 'bottomright', col= myCols(10), legend = seq(0,9, by=1), pch = 16 )
plot(ZencTEST[1:5000,], col= myCols(10)[(y_test+1)],
pch=16, xlab = 'Score 1', ylab = 'Score 2', main = 'Autoencoder' )
legend( 'bottomleft', col= myCols(10), legend = seq(0,9, by=1), pch = 16 )
Reconstructions
We can make the reconstruction of the digits with the usual manner. (Top row are the original digits, middle row the PCA reconstructions and bottom row the autoencoder reconstructions.)
Renc = predict(model, x_test) # autoencoder reconstruction
Rpca = sweep( ZpcaTEST %*% t(digitSVDS$v), 2, -mus) # PCA reconstruction
dev.off()
par(mfcol=c(3,9), mar = c(1, 1, 0, 0))
myGrays = gray(1:256 / 256)
for(u in seq_len(9) ){
image( matrix( x_test[u,], 28,28, byrow = TRUE)[,28:1], col = myGrays,
xaxt='n', yaxt='n')
image( matrix( Rpca[u,], 28,28, byrow = TRUE)[,28:1], col = myGrays ,
xaxt='n', yaxt='n')
image( matrix( Renc[u,], 28,28, byrow = TRUE)[,28:1], col = myGrays,
xaxt='n', yaxt='n')
}
As noted, more epochs and a deeper and/or more smartly trained network will give much better results. For example, the PCA reconstruction error with $k = 9$ is approximately $0.0356$; we can get almost the same error ($0.0359$) from the autoencoder described above just by increasing the training epochs from 5 to 25. In this use-case, the 2 autoencoder-derived components give a reconstruction error similar to that of 9 principal components. Cool!
|
7,419
|
Building an autoencoder in Tensorflow to surpass PCA
|
Here is my jupyter notebook where I try to replicate your result, with the following differences:
- instead of using tensorflow directly, I use it via keras
- leaky relu instead of relu to avoid saturation (i.e. encoded output being 0)
  - this might be a reason for poor performance of the AE
- autoencoder input is data scaled to [0,1]
  - I think I read somewhere that autoencoders with relu work best with [0-1] data
  - running my notebook with the autoencoder's input being mean=0, std=1 gave MSE for AE > 0.7 for all dimensionality reductions, so maybe this is one of your problems
- PCA input is kept as data with mean=0 and std=1
  - this may also mean that the MSE result of PCA is not comparable to the MSE result of AE
  - maybe I'll just re-run this later with [0-1] data for both PCA and AE
- PCA input is also scaled to [0-1]. PCA works with (mean=0, std=1) data too, but the MSE would be incomparable to AE
My MSE results for PCA from dimensionality reduction of 1 to 6
(where the input has 6 columns)
and for AE from dim. red. of 1 to 6 are below:
With PCA input being (mean=0, std=1) while AE input is in the [0-1] range:
- 4e-15 : PCA6
- .015 : PCA5
- .0502 : AE5
- .0508 : AE6
- .051 : AE4
- .053 : AE3
- .157 : PCA4
- .258 : AE2
- .259 : PCA3
- .377 : AE1
- .483 : PCA2
- .682 : PCA1
With both PCA and AE input scaled to the [0-1] range:
- 9e-15 : PCA6
- .0094 : PCA5
- .0502 : AE5
- .0507 : AE6
- .0514 : AE4
- .0532 : AE3
- .0772 : PCA4
- .1231 : PCA3
- .2588 : AE2
- .2831 : PCA2
- .3773 : AE1
- .3885 : PCA1
Linear PCA with no dimensionality reduction can achieve 9e-15 because it can just push whatever it was unable to fit into the last component.
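That last point is easy to verify: with as many components as columns, the PCA reconstruction is exact up to floating-point noise, and the error shrinks monotonically as components are added. A small numpy sketch with 6 simulated columns:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X -= X.mean(axis=0)                       # center before PCA

U, s, Vt = np.linalg.svd(X, full_matrices=False)
errs = []
for k in range(1, 7):
    R = (U[:, :k] * s[:k]) @ Vt[:k, :]    # rank-k reconstruction
    errs.append(np.mean((X - R) ** 2))
# errs decreases monotonically and is ~machine precision at k = 6
```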
|
7,420
|
Interpretation of simple predictions to odds ratios in logistic regression
|
It seems self-evident to me that
$$
\exp(\beta_0 + \beta_1x) \neq\frac{\exp(\beta_0 + \beta_1x)}{1+\exp(\beta_0 + \beta_1x)}
$$
unless $\exp(\beta_0 + \beta_1x)=0$. So, I'm less clear about what the confusion might be. What I can say is that the left hand side (LHS) of the (not) equals sign is the odds of being undernourished, whereas the RHS is the probability of being undernourished. When examined on its own, $\exp(\beta_1)$ is the odds ratio, that is, the multiplicative factor that allows you to move from odds($x$) to odds($x+1$).
Let me know if you need additional / different information.
Update:
I think this is mostly an issue of being unfamiliar with probabilities and odds, and how they relate to one another. None of that is very intuitive, you need to sit down and work with it for a while and learn to think in those terms; it doesn't come naturally to anyone.
The issue is that absolute numbers are very difficult to interpret on their own. Let's say I was telling you about a time when I had a coin and I wondered whether it was fair. So I flipped it some and got 6 heads. What does that mean? Is 6 a lot, a little, about right? It's awfully hard to say. To deal with this issue we want to give numbers some context. In a case like this there are two obvious choices for how to provide the needed context: I could give the total number of flips, or I could give the number of tails. In either case, you have adequate information to make sense of 6 heads, and you could compute the other value if the one I told you wasn't the one you preferred. Probability is the number of heads divided by the total number of events. The odds is the ratio of the number of heads to the number of non-heads (intuitively we want to say the number of tails, which works in this case, but not if there are more than 2 possibilities). With the odds, it is possible to give both numbers, e.g. 4 to 5. This means that in the long run something will happen 4 times for every 5 times it doesn't happen. When the odds are presented this way, they're called "Las Vegas odds". However in statistics, we typically divide through and say the odds are .8 instead (i.e., 4/5 = .8) for purposes of standardization. We can also convert between the odds and probabilities:
$$
\text{probability}=\frac{\text{odds}}{1+\text{odds}} ~~~~~~~~~~~~~~~~ \text{odds}=\frac{\text{probability}}{1-\text{probability}}
$$
(Relating these formulas to the first displayed equation at top: the odds is its LHS and the probability is its RHS, but remember that there is a not-equals sign in the middle.) An odds ratio is just the odds of something divided by the odds of something else; in the context of logistic regression, each $\exp(\beta)$ is the ratio of the odds for successive values of the associated covariate when all else is held equal.
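To make the conversions concrete, here is a tiny sketch (the helper names are my own):

```python
def odds_from_prob(p):
    # odds = probability / (1 - probability)
    return p / (1 - p)

def prob_from_odds(o):
    # probability = odds / (1 + odds)
    return o / (1 + o)

# "4 to 5" Las Vegas odds -> statistical odds of 0.8 -> probability 4/9
odds = 4 / 5
p = prob_from_odds(odds)          # 4/9, about 0.444
round_trip = odds_from_prob(p)    # recovers 0.8

# an odds ratio is simply one odds divided by another
odds_ratio = odds_from_prob(0.6) / odds_from_prob(0.5)   # 1.5 / 1.0 = 1.5
```

Note how a probability of .6 versus .5 (a difference of .1) corresponds to an odds ratio of 1.5, illustrating that the two scales do not move together in any simple way.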
What's important to recognize from all of these equations is that probabilities, odds, and odds ratios do not equate in any straightforward way; just because the probability goes up by .04 very much does not imply that the odds or odds ratio should be anything like .04! Moreover, probabilities range from $[0, 1]$, whereas ln odds (the output from the raw logistic regression equation) can range from $(-\infty, +\infty)$, and odds and odds ratios can range from $(0, +\infty)$. This last part is vital: Due to the bounded range of probabilities, probabilities are non-linear, but ln odds can be linear. That is, as (for example) wealth goes up by constant increments, the probability of undernourishment will increase by varying amounts, but the ln odds will increase by a constant amount and the odds will increase by a constant multiplicative factor. For any given set of values in your logistic regression model, there may be some point where
$$
\exp(\beta_0 + \beta_1x)-\exp(\beta_0 + \beta_1x') =\frac{\exp(\beta_0 + \beta_1x)}{1+\exp(\beta_0 + \beta_1x)}-\frac{\exp(\beta_0 + \beta_1x')}{1+\exp(\beta_0 + \beta_1x')}
$$
for some given $x$ and $x'$, but it will be unequal everywhere else.
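The constant-increment behaviour described above is easy to see numerically (the coefficients below are made up for illustration):

```python
import math

b0, b1 = -2.0, 0.5   # illustrative logistic regression coefficients

def log_odds(x):
    return b0 + b1 * x

def prob(x):
    return math.exp(log_odds(x)) / (1 + math.exp(log_odds(x)))

xs = [0, 1, 2, 3]
# ln odds changes by a constant b1 per unit increase in x ...
lo_steps = [log_odds(x + 1) - log_odds(x) for x in xs]
# ... so the odds change by a constant multiplicative factor exp(b1) ...
odds_ratios = [math.exp(log_odds(x + 1)) / math.exp(log_odds(x)) for x in xs]
# ... but the probability changes by varying amounts
p_steps = [prob(x + 1) - prob(x) for x in xs]
```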
(Although it was written in the context of a different question, my answer here contains a lot of information about logistic regression that may be helpful for you in understanding LR and related issues more fully.)
|
Interpretation of simple predictions to odds ratios in logistic regression
|
It seems self-evident to me that
$$
\exp(\beta_0 + \beta_1x) \neq\frac{\exp(\beta_0 + \beta_1x)}{1+\exp(\beta_0 + \beta_1x)}
$$
unless $\exp(\beta_0 + \beta_1x)=0$. So, I'm less clear about what the
|
Interpretation of simple predictions to odds ratios in logistic regression
It seems self-evident to me that
$$
\exp(\beta_0 + \beta_1x) \neq\frac{\exp(\beta_0 + \beta_1x)}{1+\exp(\beta_0 + \beta_1x)}
$$
unless $\exp(\beta_0 + \beta_1x)=0$. So, I'm less clear about what the confusion might be. What I can say is that the left hand side (LHS) of the (not) equals sign is the odds of being undernourished, whereas the RHS is the probability of being undernourished. When examined on its own, $\exp(\beta_1)$, is the odds ratio, that is the multiplicative factor that allows you to move from the odds($x$) to the odds($x+1$).
Let me know if you need additional / different information.
Update:
I think this is mostly an issue of being unfamiliar with probabilities and odds, and how they relate to one another. None of that is very intuitive; you need to sit down and work with it for a while and learn to think in those terms; it doesn't come naturally to anyone.
The issue is that absolute numbers are very difficult to interpret on their own. Let's say I was telling you about a time when I had a coin and I wondered whether it was fair. So I flipped it some and got 6 heads. What does that mean? Is 6 a lot, a little, about right? It's awfully hard to say. To deal with this issue we want to give numbers some context. In a case like this there are two obvious choices for how to provide the needed context: I could give the total number of flips, or I could give the number of tails. In either case, you have adequate information to make sense of 6 heads, and you could compute the other value if the one I told you wasn't the one you preferred. Probability is the number of heads divided by the total number of events. The odds is the ratio of the number of heads to the number of non-heads (intuitively we want to say the number of tails, which works in this case, but not if there are more than 2 possibilities). With the odds, it is possible to give both numbers, e.g. 4 to 5. This means that in the long run something will happen 4 times for every 5 times it doesn't happen. When the odds are presented this way, they're called "Las Vegas odds". However in statistics, we typically divide through and say the odds are .8 instead (i.e., 4/5 = .8) for purposes of standardization. We can also convert between the odds and probabilities:
$$
\text{probability}=\frac{\text{odds}}{1+\text{odds}} ~~~~~~~~~~~~~~~~ \text{odds}=\frac{\text{probability}}{1-\text{probability}}
$$
(Relating these formulas to the display in the question: the LHS there is the odds and the RHS is the probability, which is why a not-equals sign sits in the middle.) An odds ratio is just the odds of something divided by the odds of something else; in the context of logistic regression, each $\exp(\beta)$ is the ratio of the odds for successive values of the associated covariate when all else is held equal.
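These two conversion formulas are inverses of one another, which is easy to confirm with a couple of throwaway helper functions:

```python
def to_prob(odds):
    """probability = odds / (1 + odds)"""
    return odds / (1 + odds)

def to_odds(prob):
    """odds = probability / (1 - probability)"""
    return prob / (1 - prob)

p = to_prob(4 / 5)     # "4 to 5" Las Vegas odds, i.e. odds of .8
print(p)               # ~0.444: it happens 4 times out of every 9 events
print(to_odds(p))      # round trip recovers odds of ~.8
```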
What's important to recognize from all of these equations is that probabilities, odds, and odds ratios do not equate in any straightforward way; just because the probability goes up by .04 very much does not imply that the odds or odds ratio should be anything like .04! Moreover, probabilities range from $[0, 1]$, whereas ln odds (the output from the raw logistic regression equation) can range from $(-\infty, +\infty)$, and odds and odds ratios can range from $(0, +\infty)$. This last part is vital: Due to the bounded range of probabilities, probabilities are non-linear, but ln odds can be linear. That is, as (for example) wealth goes up by constant increments, the probability of undernourishment will increase by varying amounts, but the ln odds will increase by a constant amount and the odds will increase by a constant multiplicative factor. For any given set of values in your logistic regression model, there may be some point where
$$
\exp(\beta_0 + \beta_1x)-\exp(\beta_0 + \beta_1x') =\frac{\exp(\beta_0 + \beta_1x)}{1+\exp(\beta_0 + \beta_1x)}-\frac{\exp(\beta_0 + \beta_1x')}{1+\exp(\beta_0 + \beta_1x')}
$$
for some given $x$ and $x'$, but it will be unequal everywhere else.
(Although it was written in the context of a different question, my answer here contains a lot of information about logistic regression that may be helpful for you in understanding LR and related issues more fully.)
|
Interpretation of simple predictions to odds ratios in logistic regression
It seems self-evident to me that
$$
\exp(\beta_0 + \beta_1x) \neq\frac{\exp(\beta_0 + \beta_1x)}{1+\exp(\beta_0 + \beta_1x)}
$$
unless $\exp(\beta_0 + \beta_1x)=0$. So, I'm less clear about what the
|
7,421
|
Interpretation of simple predictions to odds ratios in logistic regression
|
Odds ratio OR = Exp(b) translates to Probability A = SQRT(OR)/(SQRT(OR)+1), where Probability A is the probability of Event A and OR is the ratio of event A happening to event A not happening (or exposed/not exposed by insurance, as in the question above).
It took me quite a while to solve; I'm not sure why that is not a well-known formula.
Here is an example. Suppose there are 10 persons admitted to the university; 7 of them are men. So, for each man, the probability of being admitted is 70%. The odds for men to be admitted are 7/3 = 2.33 and not to be admitted 3/7 = 0.43. The odds ratio (OR) is 2.33/0.43 = 5.44, which means that men have a 5.44 times higher chance of being admitted than women. Let's find the probability of being admitted for a man from the OR: P = SQRT(5.44)/(SQRT(5.44)+1) = 0.7
Update
This is true only if the number of men or women admitted is equal to the number of applicants. In other words, it is not really an OR. We can't find the probability gain (or loss) due to a factor without knowing additional information.
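The arithmetic in the example can be checked directly; as the update notes, recovering a probability from the OR this way only works in this special balanced case:

```python
import math

men, women = 7, 3                  # of 10 admitted persons

odds_men = men / women             # 7/3 ~ 2.33
odds_women = women / men           # 3/7 ~ 0.43
OR = odds_men / odds_women         # ~ 5.44

p = math.sqrt(OR) / (math.sqrt(OR) + 1)
print(round(OR, 2), round(p, 2))   # 5.44 0.7
```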
|
Interpretation of simple predictions to odds ratios in logistic regression
|
Odds ratio OR=Exp(b) translates to Probability A = SQRT(OR)/(SQRT(OR)+1), where Probability A is probability of Event A and OR is ratio of happening event A/not happening event A (or exposed/not expos
|
Interpretation of simple predictions to odds ratios in logistic regression
Odds ratio OR = Exp(b) translates to Probability A = SQRT(OR)/(SQRT(OR)+1), where Probability A is the probability of Event A and OR is the ratio of event A happening to event A not happening (or exposed/not exposed by insurance, as in the question above).
It took me quite a while to solve; I'm not sure why that is not a well-known formula.
Here is an example. Suppose there are 10 persons admitted to the university; 7 of them are men. So, for each man, the probability of being admitted is 70%. The odds for men to be admitted are 7/3 = 2.33 and not to be admitted 3/7 = 0.43. The odds ratio (OR) is 2.33/0.43 = 5.44, which means that men have a 5.44 times higher chance of being admitted than women. Let's find the probability of being admitted for a man from the OR: P = SQRT(5.44)/(SQRT(5.44)+1) = 0.7
Update
This is true only if the number of men or women admitted is equal to the number of applicants. In other words, it is not really an OR. We can't find the probability gain (or loss) due to a factor without knowing additional information.
|
Interpretation of simple predictions to odds ratios in logistic regression
Odds ratio OR=Exp(b) translates to Probability A = SQRT(OR)/(SQRT(OR)+1), where Probability A is probability of Event A and OR is ratio of happening event A/not happening event A (or exposed/not expos
|
7,422
|
How do I use the SVD in collaborative filtering?
|
However: With pure vanilla SVD you might have problems recreating the original matrix, let alone predicting values for missing items. The useful rule-of-thumb in this area is calculating average rating per movie, and subtracting this average for each user / movie combination, that is, subtracting movie bias from each user. Then it is recommended you run SVD, and of course, you would have to record these bias values somewhere, in order to recreate ratings, or predict for unknown values. I'd read Simon Funk's post on SVD for recommendations - he invented an incremental SVD approach during Netflix competition.
http://sifter.org/~simon/journal/20061211.html
I guess demeaning matrix A before SVD makes sense, since SVD's close cousin PCA also works in a similar way. In terms of incremental computation, Funk told me that if you do not demean, the first gradient direction dominates the rest of the computation. I've seen this firsthand; basically, without demeaning, things do not work.
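A minimal numpy sketch of the demean-then-SVD idea (the tiny dense ratings matrix is made up for illustration; a real one would be large and sparse):

```python
import numpy as np

# Toy ratings matrix: rows = users, columns = movies (illustrative data)
A = np.array([[5., 3., 4.],
              [4., 2., 5.],
              [3., 4., 3.]])

movie_bias = A.mean(axis=0)      # average rating per movie
A_demeaned = A - movie_bias      # subtract the movie bias before factoring

U, s, Vt = np.linalg.svd(A_demeaned, full_matrices=False)

# The bias values must be stored: ratings are recreated by adding them back
A_hat = U @ np.diag(s) @ Vt + movie_bias
print(np.allclose(A_hat, A))     # True
```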
|
How do I use the SVD in collaborative filtering?
|
However: With pure vanilla SVD you might have problems recreating the original matrix, let alone predicting values for missing items. The useful rule-of-thumb in this area is calculating average ratin
|
How do I use the SVD in collaborative filtering?
However: With pure vanilla SVD you might have problems recreating the original matrix, let alone predicting values for missing items. The useful rule-of-thumb in this area is calculating average rating per movie, and subtracting this average for each user / movie combination, that is, subtracting movie bias from each user. Then it is recommended you run SVD, and of course, you would have to record these bias values somewhere, in order to recreate ratings, or predict for unknown values. I'd read Simon Funk's post on SVD for recommendations - he invented an incremental SVD approach during Netflix competition.
http://sifter.org/~simon/journal/20061211.html
I guess demeaning matrix A before SVD makes sense, since SVD's close cousin PCA also works in a similar way. In terms of incremental computation, Funk told me that if you do not demean, the first gradient direction dominates the rest of the computation. I've seen this firsthand; basically, without demeaning, things do not work.
|
How do I use the SVD in collaborative filtering?
However: With pure vanilla SVD you might have problems recreating the original matrix, let alone predicting values for missing items. The useful rule-of-thumb in this area is calculating average ratin
|
7,423
|
How do I use the SVD in collaborative filtering?
|
I would like to offer a dissenting opinion:
Missing Edges as Missing Values
In a collaborative filtering problem, the connections that do not exist (user $i$ has not rated item $j$, person $x$ has not friended person $y$) are generally treated as missing values to be predicted, rather than as zeros. That is, if user $i$ hasn't rated item $j$, we want to guess what he might rate it if he had rated it. If person $x$ hasn't friended $y$, we want to guess how likely it is that he'd want to friend him. The recommendations are based on the reconstructed values.
When you take the SVD of the social graph (e.g., plug it through svd()), you are basically imputing zeros in all those missing spots. That this is problematic is more obvious in the user-item-rating setup for collaborative filtering. If I had a way to reliably fill in the missing entries, I wouldn't need to use SVD at all. I'd just give recommendations based on the filled in entries. If I don't have a way to do that, then I shouldn't fill them before I do the SVD.*
SVD with Missing Values
Of course, the svd() function doesn't know how to cope with missing values. So, what exactly are you supposed to do? Well, there's a way to reframe the problem as
"Find the matrix of rank $k$ which is closest to the original matrix"
That's really the problem you're trying to solve, and you're not going to use svd() to solve it. A way that worked for me (on the Netflix prize data) was this:
Try to fit the entries with a simple model, e.g., $\hat{X}_{i,j} = \mu + \alpha_i + \beta_j$. This actually does a good job.
Assign each user $i$ a $k$-vector $u_i$ and each item $j$ a $k$-vector $v_j$. (In your case, each person gets a right and left $k$-vector). You'll ultimately be predicting the residuals as dot products: $\sum u_{im}v_{jm}$
Use some algorithm to find the vectors which minimize the distance to the original matrix. For instance, use this paper
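The three steps above can be sketched as follows (numpy; the tiny data set, learning rate, and regularization constant are arbitrary illustrations, and the baseline is reduced to the global mean for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)

# Observed (user, item, rating) triples -- every other cell is MISSING, not zero
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
           (1, 2, 1.0), (2, 1, 4.0), (2, 2, 2.0)]
n_users, n_items, k = 3, 3, 2

# Step 1: a simple baseline model (here just the global mean)
mu = np.mean([r for _, _, r in ratings])

# Step 2: a k-vector per user and per item
U = rng.normal(scale=0.1, size=(n_users, k))
V = rng.normal(scale=0.1, size=(n_items, k))

# Step 3: SGD on the squared error over OBSERVED entries only
lr, reg = 0.05, 0.02
for _ in range(500):
    for i, j, r in ratings:
        err = r - (mu + U[i] @ V[j])
        u_old = U[i].copy()
        U[i] += lr * (err * V[j] - reg * U[i])
        V[j] += lr * (err * u_old - reg * V[j])

# Predictions now exist for every (user, item) pair, including missing ones
pred = mu + U @ V.T
```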
Best of luck!
* : What Tenali is recommending is basically nearest neighbors. You try to find users who are similar and make recommendations on that. Unfortunately, the sparsity problem (~99% of the matrix is missing values) makes it hard to find nearest neighbors using cosine distance or jaccard similarity or whatever. So, he's recommending doing an SVD of the matrix (with zeros imputed at the missing values) to first compress users into a smaller feature space and then do comparisons there. Doing SVD-nearest-neighbors is fine, but I would still recommend doing the SVD the right way (I mean... my way). No need to do nonsensical value imputation!
|
How do I use the SVD in collaborative filtering?
|
I would like to offer a dissenting opinion:
Missing Edges as Missing Values
In a collaborative filtering problem, the connections that do not exist (user $i$ has not rated item $j$, person $x$ has not
|
How do I use the SVD in collaborative filtering?
I would like to offer a dissenting opinion:
Missing Edges as Missing Values
In a collaborative filtering problem, the connections that do not exist (user $i$ has not rated item $j$, person $x$ has not friended person $y$) are generally treated as missing values to be predicted, rather than as zeros. That is, if user $i$ hasn't rated item $j$, we want to guess what he might rate it if he had rated it. If person $x$ hasn't friended $y$, we want to guess how likely it is that he'd want to friend him. The recommendations are based on the reconstructed values.
When you take the SVD of the social graph (e.g., plug it through svd()), you are basically imputing zeros in all those missing spots. That this is problematic is more obvious in the user-item-rating setup for collaborative filtering. If I had a way to reliably fill in the missing entries, I wouldn't need to use SVD at all. I'd just give recommendations based on the filled in entries. If I don't have a way to do that, then I shouldn't fill them before I do the SVD.*
SVD with Missing Values
Of course, the svd() function doesn't know how to cope with missing values. So, what exactly are you supposed to do? Well, there's a way to reframe the problem as
"Find the matrix of rank $k$ which is closest to the original matrix"
That's really the problem you're trying to solve, and you're not going to use svd() to solve it. A way that worked for me (on the Netflix prize data) was this:
Try to fit the entries with a simple model, e.g., $\hat{X}_{i,j} = \mu + \alpha_i + \beta_j$. This actually does a good job.
Assign each user $i$ a $k$-vector $u_i$ and each item $j$ a $k$-vector $v_j$. (In your case, each person gets a right and left $k$-vector). You'll ultimately be predicting the residuals as dot products: $\sum u_{im}v_{jm}$
Use some algorithm to find the vectors which minimize the distance to the original matrix. For instance, use this paper
Best of luck!
* : What Tenali is recommending is basically nearest neighbors. You try to find users who are similar and make recommendations on that. Unfortunately, the sparsity problem (~99% of the matrix is missing values) makes it hard to find nearest neighbors using cosine distance or jaccard similarity or whatever. So, he's recommending doing an SVD of the matrix (with zeros imputed at the missing values) to first compress users into a smaller feature space and then do comparisons there. Doing SVD-nearest-neighbors is fine, but I would still recommend doing the SVD the right way (I mean... my way). No need to do nonsensical value imputation!
|
How do I use the SVD in collaborative filtering?
I would like to offer a dissenting opinion:
Missing Edges as Missing Values
In a collaborative filtering problem, the connections that do not exist (user $i$ has not rated item $j$, person $x$ has not
|
7,424
|
How do I use the SVD in collaborative filtering?
|
The reason no one tells you what to do with it is because if you know what SVD does, then it is a bit obvious what to do with it :-).
Since your rows and columns are the same set, I will explain this through a different matrix A. Let the matrix A be such that rows are the users and the columns are the items that the user likes. Note that this matrix need not be symmetric, but in your case, I guess it turns out to be symmetric.
One way to think of SVD is as follows :
SVD finds a hidden feature space where the users and items they like have feature vectors that are closely aligned.
So, when we compute the SVD $A = U \times s \times V^T$, the $U$ matrix represents the feature vectors corresponding to the users in the hidden feature space and the $V$ matrix represents the feature vectors corresponding to the items in the hidden feature space.
Now, if I give you two vectors from the same feature space and ask you to find if they are similar, what is the simplest thing that you can think of for accomplishing that? Dot product.
So, if I want to see how much user $i$ likes item $j$, all I need to do is take the dot product of the $i$th row of $U$ and the $j$th row of $V$. Of course, the dot product is by no means the only thing you can apply; any similarity measure that you can think of is applicable.
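A minimal numpy illustration of this (the binary likes matrix is made up): keep the top-$k$ singular directions and score user $i$ against item $j$ by a dot product of their feature vectors.

```python
import numpy as np

# Toy likes matrix: rows = users, columns = items (1 = user likes item)
A = np.array([[1., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 0., 1., 1.]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

k = 2
user_feats = U[:, :k] * s[:k]   # i-th row: user i in the hidden feature space
item_feats = Vt[:k, :].T        # j-th row: item j in the hidden feature space

# Dot product of user i and item j = entry (i, j) of the rank-k reconstruction
scores = user_feats @ item_feats.T
print(np.round(scores, 2))
```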
|
How do I use the SVD in collaborative filtering?
|
The reason no one tells you what to do with it is because if you know what SVD does, then it is a bit obvious what to do with it :-).
Since your rows and columns are the same set, I will explain this
|
How do I use the SVD in collaborative filtering?
The reason no one tells you what to do with it is because if you know what SVD does, then it is a bit obvious what to do with it :-).
Since your rows and columns are the same set, I will explain this through a different matrix A. Let the matrix A be such that rows are the users and the columns are the items that the user likes. Note that this matrix need not be symmetric, but in your case, I guess it turns out to be symmetric.
One way to think of SVD is as follows :
SVD finds a hidden feature space where the users and items they like have feature vectors that are closely aligned.
So, when we compute the SVD $A = U \times s \times V^T$, the $U$ matrix represents the feature vectors corresponding to the users in the hidden feature space and the $V$ matrix represents the feature vectors corresponding to the items in the hidden feature space.
Now, if I give you two vectors from the same feature space and ask you to find if they are similar, what is the simplest thing that you can think of for accomplishing that? Dot product.
So, if I want to see how much user $i$ likes item $j$, all I need to do is take the dot product of the $i$th row of $U$ and the $j$th row of $V$. Of course, the dot product is by no means the only thing you can apply; any similarity measure that you can think of is applicable.
|
How do I use the SVD in collaborative filtering?
The reason no one tells you what to do with it is because if you know what SVD does, then it is a bit obvious what to do with it :-).
Since your rows and columns are the same set, I will explain this
|
7,425
|
How do I use the SVD in collaborative filtering?
|
This is to try and answer the "how to" part of the question for those who want to practically implement sparse-SVD recommendations or inspect source code for the details. You can use off-the-shelf FOSS software to model sparse-SVD. For example, vowpal wabbit, libFM, or redsvd.
vowpal wabbit has 3 implementations of "SVD-like" algorithms (each selectable by one of 3 command line options). Strictly speaking, these should be called "approximate, iterative, matrix factorization" rather than pure "classic" SVD, but they are closely related to SVD. You may think of them as a very computationally-efficient approximate SVD-factorization of a sparse (mostly zeroes) matrix.
Here's a full, working recipe for doing Netflix style movie recommendations with vowpal wabbit and its "low-ranked quadratic" (--lrq) option which seems to work best for me:
Data set format file ratings.vw (each rating on one line by user and movie):
5 |user 1 |movie 37
3 |user 2 |movie 1019
4 |user 1 |movie 25
1 |user 3 |movie 238
...
Where the 1st number is the rating (1 to 5 stars), followed by the ID of the user who rated, and the ID of the movie that was rated.
Test data is in the same format but can (optionally) omit the ratings column:
|user 1 |movie 234
|user 12 |movie 1019
...
The ratings are optional because to evaluate/test predictions we need ratings to compare the predictions to. If we omit the ratings, vowpal wabbit will still predict the ratings but won't be able to estimate the prediction error (predicted values vs actual values in the data).
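A hypothetical helper (the function name is mine, not part of vowpal wabbit) that emits this line format from (user, movie, rating) triples:

```python
def to_vw_line(user, movie, rating=None):
    """Format one example in the vw input format shown above.

    Pass rating=None to produce a test-set line with the label omitted.
    """
    label = f"{rating} " if rating is not None else ""
    return f"{label}|user {user} |movie {movie}"

with open("ratings.vw", "w") as f:
    for user, movie, rating in [(1, 37, 5), (2, 1019, 3), (1, 25, 4)]:
        f.write(to_vw_line(user, movie, rating) + "\n")

print(to_vw_line(1, 234))   # |user 1 |movie 234  (unlabeled test line)
```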
To train we ask vowpal wabbit to find a set of N latent interaction factors between users and movies they like (or dislike). You may think about this as finding common themes where similar users rate a subset of movies in a similar way and using these common themes to predict how a user would rate a movie he hasn't rated yet.
vw options and arguments we need to use:
--lrq <x><y><N> finds "low-ranked quadratic" latent-factors.
<x><y> : "um" means cross the u[sers] and m[ovie] name-spaces in the data-set. Note that only the 1st letter in each name-space is used with the --lrq option.
<N> : N=14 below is the number of latent factors we want to find
-f model_filename: write the final model into model_filename
So a simple full training command would be:
vw --lrq um14 -d ratings.vw -f ratings.model
Once we have the ratings.model model file, we can use it to predict additional ratings on a new data-set more_ratings.vw:
vw -i ratings.model -d more_ratings.vw -p more_ratings.predicted
The predictions will be written to the file more_ratings.predicted.
Using demo/movielens in the vowpalwabbit source tree, I get ~0.693 MAE (Mean Absolute Error) after training on 1 million user/movie ratings ml-1m.ratings.train.vw with 14 latent factors (meaning that the SVD middle matrix is a 14x14 matrix) and testing on the independent test-set ml-1m.ratings.test.vw. How good is 0.69 MAE? For the full range of possible predictions, including the unrated (0) case [0 to 5], a 0.69 error is ~13.8% (0.69/5.0) of the full range, i.e. about 86.2% accuracy (1 - 0.138).
You can find examples and a full demo for a similar data-set (movielens) with documentation in the vowpal wabbit source tree on github:
Matrix factorization example: using the --rank option
Low rank quadratic demo: using the --lrq option
Notes:
The movielens demo uses several options I omitted (for simplicity) from my example: in particular --loss_function quantile, --adaptive, and --invariant
The --lrq implementation in vw is much faster than --rank, in particular when storing and loading the models.
Credits:
--rank vw option was implemented by Jake Hofman
--lrq vw option (with optional dropout) was implemented by Paul Mineiro
vowpal wabbit (aka vw) is the brain child of John Langford
|
How do I use the SVD in collaborative filtering?
|
This is to try and answer the "how to" part of the question for those who want to practically implement sparse-SVD recommendations or inspect source code for the details. You can use an off-the-shelf
|
How do I use the SVD in collaborative filtering?
This is to try and answer the "how to" part of the question for those who want to practically implement sparse-SVD recommendations or inspect source code for the details. You can use off-the-shelf FOSS software to model sparse-SVD. For example, vowpal wabbit, libFM, or redsvd.
vowpal wabbit has 3 implementations of "SVD-like" algorithms (each selectable by one of 3 command line options). Strictly speaking, these should be called "approximate, iterative, matrix factorization" rather than pure "classic" SVD, but they are closely related to SVD. You may think of them as a very computationally-efficient approximate SVD-factorization of a sparse (mostly zeroes) matrix.
Here's a full, working recipe for doing Netflix style movie recommendations with vowpal wabbit and its "low-ranked quadratic" (--lrq) option which seems to work best for me:
Data set format file ratings.vw (each rating on one line by user and movie):
5 |user 1 |movie 37
3 |user 2 |movie 1019
4 |user 1 |movie 25
1 |user 3 |movie 238
...
Where the 1st number is the rating (1 to 5 stars), followed by the ID of the user who rated, and the ID of the movie that was rated.
Test data is in the same format but can (optionally) omit the ratings column:
|user 1 |movie 234
|user 12 |movie 1019
...
The ratings are optional because to evaluate/test predictions we need ratings to compare the predictions to. If we omit the ratings, vowpal wabbit will still predict the ratings but won't be able to estimate the prediction error (predicted values vs actual values in the data).
To train we ask vowpal wabbit to find a set of N latent interaction factors between users and movies they like (or dislike). You may think about this as finding common themes where similar users rate a subset of movies in a similar way and using these common themes to predict how a user would rate a movie he hasn't rated yet.
vw options and arguments we need to use:
--lrq <x><y><N> finds "low-ranked quadratic" latent-factors.
<x><y> : "um" means cross the u[sers] and m[ovie] name-spaces in the data-set. Note that only the 1st letter in each name-space is used with the --lrq option.
<N> : N=14 below is the number of latent factors we want to find
-f model_filename: write the final model into model_filename
So a simple full training command would be:
vw --lrq um14 -d ratings.vw -f ratings.model
Once we have the ratings.model model file, we can use it to predict additional ratings on a new data-set more_ratings.vw:
vw -i ratings.model -d more_ratings.vw -p more_ratings.predicted
The predictions will be written to the file more_ratings.predicted.
Using demo/movielens in the vowpalwabbit source tree, I get ~0.693 MAE (Mean Absolute Error) after training on 1 million user/movie ratings ml-1m.ratings.train.vw with 14 latent factors (meaning that the SVD middle matrix is a 14x14 matrix) and testing on the independent test-set ml-1m.ratings.test.vw. How good is 0.69 MAE? For the full range of possible predictions, including the unrated (0) case [0 to 5], a 0.69 error is ~13.8% (0.69/5.0) of the full range, i.e. about 86.2% accuracy (1 - 0.138).
You can find examples and a full demo for a similar data-set (movielens) with documentation in the vowpal wabbit source tree on github:
Matrix factorization example: using the --rank option
Low rank quadratic demo: using the --lrq option
Notes:
The movielens demo uses several options I omitted (for simplicity) from my example: in particular --loss_function quantile, --adaptive, and --invariant
The --lrq implementation in vw is much faster than --rank, in particular when storing and loading the models.
Credits:
--rank vw option was implemented by Jake Hofman
--lrq vw option (with optional dropout) was implemented by Paul Mineiro
vowpal wabbit (aka vw) is the brain child of John Langford
|
How do I use the SVD in collaborative filtering?
This is to try and answer the "how to" part of the question for those who want to practically implement sparse-SVD recommendations or inspect source code for the details. You can use an off-the-shelf
|
7,426
|
How do I use the SVD in collaborative filtering?
|
I would say that the name SVD is misleading.
In fact, the SVD method in recommender systems doesn't directly use an SVD factorization. Instead, it uses stochastic gradient descent to train the biases and factor vectors.
The details of the SVD and SVD++ algorithms for recommender systems can be found in Sections 5.3.1 and 5.3.2 of the book Francesco Ricci, Lior Rokach, Bracha Shapira, and Paul B. Kantor. Recommender Systems Handbook. 1st edition, 2010.
In Python, there is a well-established package named surprise that implements these algorithms. Its documentation also covers the details of these algorithms.
|
How do I use the SVD in collaborative filtering?
|
I would say that the name SVD is misleading.
In fact, the SVD method in recommender system doesn't directly use SVD factorization. Instead, it uses stochastic gradient descent to train the biases and
|
How do I use the SVD in collaborative filtering?
I would say that the name SVD is misleading.
In fact, the SVD method in recommender systems doesn't directly use an SVD factorization. Instead, it uses stochastic gradient descent to train the biases and factor vectors.
The details of the SVD and SVD++ algorithms for recommender systems can be found in Sections 5.3.1 and 5.3.2 of the book Francesco Ricci, Lior Rokach, Bracha Shapira, and Paul B. Kantor. Recommender Systems Handbook. 1st edition, 2010.
In Python, there is a well-established package named surprise that implements these algorithms. Its documentation also covers the details of these algorithms.
|
How do I use the SVD in collaborative filtering?
I would say that the name SVD is misleading.
In fact, the SVD method in recommender system doesn't directly use SVD factorization. Instead, it uses stochastic gradient descent to train the biases and
|
7,427
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
Let's say you've trained your Naive Bayes Classifier on 2 classes, "Ham" and "Spam" (i.e. it classifies emails). For the sake of simplicity, we'll assume prior probabilities to be 50/50.
Now let's say you have an email $(w_1, w_2,...,w_n)$ which your classifier rates very highly as "Ham", say $$P(Ham|w_1,w_2,...w_n) = .90$$ and $$P(Spam|w_1,w_2,..w_n) = .10$$
So far so good.
Now let's say you have another email $(w_1, w_2, ...,w_n,w_{n+1})$ which is exactly the same as the above email except that there's one word in it, $w_{n+1}$, that isn't included in the vocabulary. Since this word's training count is 0, its likelihood in each class is $$P(w_{n+1}|Ham) = P(w_{n+1}|Spam) = 0$$
Suddenly, $$P(Ham|w_1,w_2,...w_n,w_{n+1}) \propto P(Ham|w_1,w_2,...w_n) \times P(w_{n+1}|Ham) = 0$$ and $$P(Spam|w_1,w_2,..w_n,w_{n+1}) \propto P(Spam|w_1,w_2,...w_n) \times P(w_{n+1}|Spam) = 0$$
Despite the 1st email being strongly classified in one class, this 2nd email may be classified differently because of that last word having a probability of zero.
Laplace smoothing solves this by giving the last word a small non-zero probability for both classes, so that the posterior probabilities don't suddenly drop to zero.
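A toy numerical version of this failure and its fix (the word counts are invented; alpha is the Laplace smoothing constant, with alpha = 1 being classic add-one smoothing):

```python
from math import prod

# Hypothetical per-class word counts from training
counts = {
    "ham":  {"hello": 8, "meeting": 5, "report": 7},
    "spam": {"hello": 2, "winner": 9, "prize": 9},
}
vocab = {w for c in counts.values() for w in c} | {"blockchain"}  # unseen word

def word_likelihood(word, cls, alpha):
    c = counts[cls]
    return (c.get(word, 0) + alpha) / (sum(c.values()) + alpha * len(vocab))

def class_score(words, cls, alpha):
    # Product of per-word likelihoods (priors are 50/50, so they cancel)
    return prod(word_likelihood(w, cls, alpha) for w in words)

email = ["hello", "meeting", "report", "blockchain"]   # last word is unseen

print(class_score(email, "ham", alpha=0))  # 0.0: one unseen word zeroes it out
print(class_score(email, "ham", alpha=1))  # small but non-zero
```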
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
Let's say you've trained your Naive Bayes Classifier on 2 classes, "Ham" and "Spam" (i.e. it classifies emails). For the sake of simplicity, we'll assume prior probabilities to be 50/50.
Now let's sa
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
Let's say you've trained your Naive Bayes Classifier on 2 classes, "Ham" and "Spam" (i.e. it classifies emails). For the sake of simplicity, we'll assume prior probabilities to be 50/50.
Now let's say you have an email $(w_1, w_2,...,w_n)$ which your classifier rates very highly as "Ham", say $$P(Ham|w_1,w_2,...w_n) = .90$$ and $$P(Spam|w_1,w_2,..w_n) = .10$$
So far so good.
Now let's say you have another email $(w_1, w_2, ...,w_n,w_{n+1})$ which is exactly the same as the above email except that there's one word in it, $w_{n+1}$, that isn't included in the vocabulary. Since this word's training count is 0, its likelihood in each class is $$P(w_{n+1}|Ham) = P(w_{n+1}|Spam) = 0$$
Suddenly, $$P(Ham|w_1,w_2,...w_n,w_{n+1}) \propto P(Ham|w_1,w_2,...w_n) \times P(w_{n+1}|Ham) = 0$$ and $$P(Spam|w_1,w_2,..w_n,w_{n+1}) \propto P(Spam|w_1,w_2,...w_n) \times P(w_{n+1}|Spam) = 0$$
Despite the 1st email being strongly classified in one class, this 2nd email may be classified differently because of that last word having a probability of zero.
Laplace smoothing solves this by giving the last word a small non-zero probability for both classes, so that the posterior probabilities don't suddenly drop to zero.
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
Let's say you've trained your Naive Bayes Classifier on 2 classes, "Ham" and "Spam" (i.e. it classifies emails). For the sake of simplicity, we'll assume prior probabilities to be 50/50.
Now let's sa
|
7,428
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
You always need this 'fail-safe' probability.
To see why, consider the worst case where none of the words in the training sample appear in the test sentence. In this case, under your model we would conclude that the sentence is impossible, yet it clearly exists, creating a contradiction.
Another extreme example is the test sentence "Alex met Steve.", where "met" appears several times in the training sample but "Alex" and "Steve" don't. Your model would conclude this statement is very likely, which is not true.
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
You always need this 'fail-safe' probability.
To see why consider the worst case where none of the words in the training sample appear in the test sentence. In this case, under your model we would co
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
You always need this 'fail-safe' probability.
To see why, consider the worst case where none of the words in the training sample appear in the test sentence. In this case, under your model we would conclude that the sentence is impossible, but it clearly exists, creating a contradiction.
Another extreme example is the test sentence "Alex met Steve.", where "met" appears several times in the training sample but "Alex" and "Steve" don't. Your model would conclude this statement is very likely, which is not true.
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
You always need this 'fail-safe' probability.
To see why consider the worst case where none of the words in the training sample appear in the test sentence. In this case, under your model we would co
|
7,429
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
This question is rather simple if you are familiar with Bayes estimators, since it is a direct conclusion of the Bayes estimator.
In the Bayesian approach, parameters are considered to be a quantity whose variation can be described by a probability distribution (or prior distribution).
So, if we view the procedure of picking words as a multinomial distribution, then we can solve the question in a few steps.
First, define
$$m = |V|, n = \sum n_i$$
If we assume the prior distribution of $p_i$ is the uniform distribution, we can calculate its conditional probability distribution as
$$p(p_1,p_2,...,p_m|n_1,n_2,...,n_m) = \frac{\Gamma(n+m)}{\prod\limits_{i=1}^{m}\Gamma(n_i+1)}\prod\limits_{i=1}^{m}p_i^{n_i}$$
We find that it is in fact a Dirichlet distribution, and the expectation of $p_i$ is
$$
E[p_i] = \frac{n_i+1}{n+m}
$$
A natural estimate for $p_i$ is the mean of the posterior distribution. So we can give the Bayes estimator of $p_i$:
$$
\hat p_i = E[p_i]
$$
You can see we just draw the same conclusion as Laplace Smoothing.
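As a quick numeric check (with made-up counts), the posterior mean $(n_i+1)/(n+m)$ coincides with the add-one smoothed frequency estimate:

```python
# Verify E[p_i] = (n_i + 1) / (n + m) matches add-one smoothing
# for some invented counts.

counts = [3, 1, 0]                     # n_i for m = 3 words
m, n = len(counts), sum(counts)

posterior_mean = [(n_i + 1) / (n + m) for n_i in counts]
laplace = [(c + 1) / (sum(counts) + len(counts)) for c in counts]

print(posterior_mean)                  # roughly [0.571, 0.286, 0.143]
assert posterior_mean == laplace       # same formula, hence identical
```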
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
This question is rather simple if you are familiar with Bayes estimators, since it is a direct conclusion of the Bayes estimator.
In the Bayesian approach, parameters are considered to be a quantity w
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
This question is rather simple if you are familiar with Bayes estimators, since it is a direct conclusion of the Bayes estimator.
In the Bayesian approach, parameters are considered to be a quantity whose variation can be described by a probability distribution (or prior distribution).
So, if we view the procedure of picking words as a multinomial distribution, then we can solve the question in a few steps.
First, define
$$m = |V|, n = \sum n_i$$
If we assume the prior distribution of $p_i$ is the uniform distribution, we can calculate its conditional probability distribution as
$$p(p_1,p_2,...,p_m|n_1,n_2,...,n_m) = \frac{\Gamma(n+m)}{\prod\limits_{i=1}^{m}\Gamma(n_i+1)}\prod\limits_{i=1}^{m}p_i^{n_i}$$
We find that it is in fact a Dirichlet distribution, and the expectation of $p_i$ is
$$
E[p_i] = \frac{n_i+1}{n+m}
$$
A natural estimate for $p_i$ is the mean of the posterior distribution. So we can give the Bayes estimator of $p_i$:
$$
\hat p_i = E[p_i]
$$
You can see we just draw the same conclusion as Laplace Smoothing.
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
This question is rather simple if you are familiar with Bayes estimators, since it is a direct conclusion of the Bayes estimator.
In the Bayesian approach, parameters are considered to be a quantity w
|
7,430
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
Disregarding those words is another way to handle it. It corresponds to averaging over (integrating out) all missing variables, so the result is different. How?
Assuming the notation used here:
$$
P(C^{*}|d) = \arg\max_{C} \frac{\prod_{i}p(t_{i}|C)P(C)}{P(d)} \propto \arg\max_{C} \prod_{i}p(t_{i}|C)P(C)
$$
where $t_{i}$ are the tokens in the vocabulary and $d$ is a document.
Let's say token $t_{k}$ does not appear. Instead of using Laplace smoothing (which comes from imposing a Dirichlet prior on the multinomial naive Bayes), you sum out $t_{k}$, which corresponds to saying: I take a weighted vote over all possibilities for the unknown token (having it or not).
$$
P(C^{*}|d) \propto \arg\max_{C} \sum_{t_{k}} \prod_{i}p(t_{i}|C)P(C) =
\arg\max_{C} P(C)\prod_{i \neq k}p(t_{i}|C) \sum_{t_{k}} p(t_{k}|C) =
\arg\max_{C} P(C)\prod_{i \neq k}p(t_{i}|C)
$$
But in practice one prefers the smoothing approach. Instead of ignoring those tokens, you assign them a low probability, which is like thinking: if I have unknown tokens, it is less likely that this is the kind of document I'd otherwise think it is.
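Here is a minimal sketch of the two options, with all priors and token probabilities invented: summing out $t_k$ multiplies each class score by $\sum_{t_k} p(t_k|C) = 1$, so it is exactly equivalent to dropping the token, whereas smoothing multiplies in a small class-dependent factor.

```python
# Made-up class priors and token likelihoods for two classes.
priors = {"spam": 0.5, "ham": 0.5}
known = {"spam": 0.02 * 0.05, "ham": 0.01 * 0.03}   # prod_i p(t_i|C) over known tokens

# (a) Sum out the unknown token t_k: its probabilities sum to 1,
#     so this equals simply dropping it from the product.
p_tk = {"spam": [0.25, 0.75], "ham": [0.5, 0.5]}
summed_out = {c: priors[c] * known[c] * sum(p_tk[c]) for c in priors}
dropped = {c: priors[c] * known[c] for c in priors}
assert summed_out == dropped

# (b) Smoothing: multiply in a small, class-dependent probability,
#     here 1/(N_c + V) with invented class totals N_c and vocabulary size V.
smooth_factor = {"spam": 1 / (50 + 100), "ham": 1 / (30 + 100)}
smoothed = {c: priors[c] * known[c] * smooth_factor[c] for c in priors}

print(max(summed_out, key=summed_out.get))  # winning class when summing out
print(max(smoothed, key=smoothed.get))      # winning class when smoothing
```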
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
Disregarding those words is another way to handle it. It corresponds to averaging over (integrating out) all missing variables, so the result is different. How?
Assuming the notation used here:
$$
P(C^{
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
Disregarding those words is another way to handle it. It corresponds to averaging over (integrating out) all missing variables, so the result is different. How?
Assuming the notation used here:
$$
P(C^{*}|d) = \arg\max_{C} \frac{\prod_{i}p(t_{i}|C)P(C)}{P(d)} \propto \arg\max_{C} \prod_{i}p(t_{i}|C)P(C)
$$
where $t_{i}$ are the tokens in the vocabulary and $d$ is a document.
Let's say token $t_{k}$ does not appear. Instead of using Laplace smoothing (which comes from imposing a Dirichlet prior on the multinomial naive Bayes), you sum out $t_{k}$, which corresponds to saying: I take a weighted vote over all possibilities for the unknown token (having it or not).
$$
P(C^{*}|d) \propto \arg\max_{C} \sum_{t_{k}} \prod_{i}p(t_{i}|C)P(C) =
\arg\max_{C} P(C)\prod_{i \neq k}p(t_{i}|C) \sum_{t_{k}} p(t_{k}|C) =
\arg\max_{C} P(C)\prod_{i \neq k}p(t_{i}|C)
$$
But in practice one prefers the smoothing approach. Instead of ignoring those tokens, you assign them a low probability, which is like thinking: if I have unknown tokens, it is less likely that this is the kind of document I'd otherwise think it is.
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
Disregarding those words is another way to handle it. It corresponds to averaging over (integrating out) all missing variables, so the result is different. How?
Assuming the notation used here:
$$
P(C^{
|
7,431
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
You want to know why we bother with smoothing at all in a Naive Bayes classifier (when we can throw away the unknown features instead).
The answer to your question is: not all words have to be unknown in all classes.
Say there are two classes M and N with features A, B and C, as follows:
M: A=3, B=1, C=0
(In the class M, A appears 3 times and B only once)
N: A=0, B=1, C=3
(In the class N, C appears 3 times and B only once)
Let's see what happens when you throw away features that appear zero times.
A) Throw Away Features That Appear Zero Times In Any Class
If you throw away features A and C because they appear zero times in any of the classes, then you are only left with feature B to classify documents with.
And losing that information is a bad thing as you will see below!
If you're presented with a test document as follows:
B=1, C=3
(It contains B once and C three times)
Now, since you've discarded the features A and C, you won't be able to tell whether the above document belongs to class M or class N.
So, losing any feature information is a bad thing!
B) Throw Away Features That Appear Zero Times In All Classes
Is it possible to get around this problem by discarding only those features that appear zero times in all of the classes?
No, because that would create its own problems!
The following test document illustrates what would happen if we did that:
A=3, B=1, C=1
The probability of M and N would both become zero (because we did not throw away the zero probability of A in class N and the zero probability of C in class M).
C) Don't Throw Anything Away - Use Smoothing Instead
Smoothing allows you to classify both the above documents correctly because:
You do not lose count information in classes where such information is available and
You do not have to contend with zero counts.
Naive Bayes Classifiers In Practice
The Naive Bayes classifier in NLTK used to throw away features that had zero counts in any of the classes.
This used to make it perform poorly when trained using a hard EM procedure (where the classifier is bootstrapped up from very little training data).
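The M/N toy example above can be made runnable; this sketch assumes equal class priors and add-one smoothing:

```python
# Naive Bayes with add-one smoothing on the toy counts from the answer.

counts = {"M": {"A": 3, "B": 1, "C": 0},
          "N": {"A": 0, "B": 1, "C": 3}}
vocab = ["A", "B", "C"]

def score(doc, cls):
    # doc: {word: count}; product of smoothed per-word likelihoods
    total = sum(counts[cls].values())
    p = 1.0
    for w, k in doc.items():
        p *= ((counts[cls][w] + 1) / (total + len(vocab))) ** k
    return p

def classify(doc):
    return max(counts, key=lambda c: score(doc, c))

print(classify({"B": 1, "C": 3}))          # the first test document -> 'N'
print(classify({"A": 3, "B": 1, "C": 1}))  # the second test document -> 'M'
```

Both test documents from the answer are classified sensibly, with no zero probabilities involved.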
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
You want to know why we bother with smoothing at all in a Naive Bayes classifier (when we can throw away the unknown features instead).
The answer to your question is: not all words have to be unknown
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
You want to know why we bother with smoothing at all in a Naive Bayes classifier (when we can throw away the unknown features instead).
The answer to your question is: not all words have to be unknown in all classes.
Say there are two classes M and N with features A, B and C, as follows:
M: A=3, B=1, C=0
(In the class M, A appears 3 times and B only once)
N: A=0, B=1, C=3
(In the class N, C appears 3 times and B only once)
Let's see what happens when you throw away features that appear zero times.
A) Throw Away Features That Appear Zero Times In Any Class
If you throw away features A and C because they appear zero times in any of the classes, then you are only left with feature B to classify documents with.
And losing that information is a bad thing as you will see below!
If you're presented with a test document as follows:
B=1, C=3
(It contains B once and C three times)
Now, since you've discarded the features A and C, you won't be able to tell whether the above document belongs to class M or class N.
So, losing any feature information is a bad thing!
B) Throw Away Features That Appear Zero Times In All Classes
Is it possible to get around this problem by discarding only those features that appear zero times in all of the classes?
No, because that would create its own problems!
The following test document illustrates what would happen if we did that:
A=3, B=1, C=1
The probability of M and N would both become zero (because we did not throw away the zero probability of A in class N and the zero probability of C in class M).
C) Don't Throw Anything Away - Use Smoothing Instead
Smoothing allows you to classify both the above documents correctly because:
You do not lose count information in classes where such information is available and
You do not have to contend with zero counts.
Naive Bayes Classifiers In Practice
The Naive Bayes classifier in NLTK used to throw away features that had zero counts in any of the classes.
This used to make it perform poorly when trained using a hard EM procedure (where the classifier is bootstrapped up from very little training data).
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
You want to know why we bother with smoothing at all in a Naive Bayes classifier (when we can throw away the unknown features instead).
The answer to your question is: not all words have to be unknown
|
7,432
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
I also came across the same problem while studying Naive Bayes.
According to me, whenever we encounter a test example containing a feature we hadn't come across during training, our posterior probability will become 0.
So by adding 1, even if we never train on a particular feature/class, the posterior probability will never be 0.
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
I also came across the same problem while studying Naive Bayes.
According to me, whenever we encounter a test example which we hadn't come across during training, then our posterior probability will b
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
I also came across the same problem while studying Naive Bayes.
According to me, whenever we encounter a test example containing a feature we hadn't come across during training, our posterior probability will become 0.
So by adding 1, even if we never train on a particular feature/class, the posterior probability will never be 0.
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
I also came across the same problem while studying Naive Bayes.
According to me, whenever we encounter a test example which we hadn't come across during training, then our posterior probability will b
|
7,433
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
Matt, you are correct, you raise a very good point - yes, Laplace smoothing is quite frankly nonsense! Simply throwing away those features can be a valid approach, particularly when the denominator is also a small number - there is simply not enough evidence to support the probability estimation.
I have a strong aversion to solving any problem via use of some arbitrary adjustment. The problem here is zeros, the "solution" is to just "add some small value to zero so it's not zero anymore - MAGIC the problem is no more". Of course that's totally arbitrary.
Your suggestion of better feature selection to begin with is a less arbitrary approach, and IME it increases performance. Furthermore, Laplace smoothing in conjunction with naive Bayes as the model has, in my experience, worsened the granularity problem - i.e. the problem where output scores tend to be close to 1.0 or 0.0 (if the number of features is infinite then every score will be 1.0 or 0.0 - this is a consequence of the independence assumption).
Now alternative techniques for probability estimation exist (other than max likelihood + Laplace smoothing), but are massively under documented. In fact there is a whole field called Inductive Logic and Inference Processes that use a lot of tools from Information Theory.
What we use in practice is Minimum Cross-Entropy Updating, which is an extension of Jeffrey's Updating, where we define the convex region of probability space consistent with the evidence to be the region such that a point in it would mean the Maximum Likelihood estimate is within the Expected Absolute Deviation from the point.
This has the nice property that as the number of data points decreases, the estimates piecewise-smoothly approach the prior - and therefore their effect in the Bayesian calculation is null. Laplace smoothing, on the other hand, makes each estimate approach the point of Maximum Entropy, which may not be the prior, and therefore its effect in the calculation is not null and will just add noise.
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
Matt you are correct you raise a very good point - yes Laplace Smoothing is quite frankly nonsense! Just simply throwing away those features can be a valid approach, particularly when the denominator
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
Matt, you are correct, you raise a very good point - yes, Laplace smoothing is quite frankly nonsense! Simply throwing away those features can be a valid approach, particularly when the denominator is also a small number - there is simply not enough evidence to support the probability estimation.
I have a strong aversion to solving any problem via use of some arbitrary adjustment. The problem here is zeros, the "solution" is to just "add some small value to zero so it's not zero anymore - MAGIC the problem is no more". Of course that's totally arbitrary.
Your suggestion of better feature selection to begin with is a less arbitrary approach, and IME it increases performance. Furthermore, Laplace smoothing in conjunction with naive Bayes as the model has, in my experience, worsened the granularity problem - i.e. the problem where output scores tend to be close to 1.0 or 0.0 (if the number of features is infinite then every score will be 1.0 or 0.0 - this is a consequence of the independence assumption).
Now alternative techniques for probability estimation exist (other than max likelihood + Laplace smoothing), but are massively under documented. In fact there is a whole field called Inductive Logic and Inference Processes that use a lot of tools from Information Theory.
What we use in practice is Minimum Cross-Entropy Updating, which is an extension of Jeffrey's Updating, where we define the convex region of probability space consistent with the evidence to be the region such that a point in it would mean the Maximum Likelihood estimate is within the Expected Absolute Deviation from the point.
This has the nice property that as the number of data points decreases, the estimates piecewise-smoothly approach the prior - and therefore their effect in the Bayesian calculation is null. Laplace smoothing, on the other hand, makes each estimate approach the point of Maximum Entropy, which may not be the prior, and therefore its effect in the calculation is not null and will just add noise.
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
Matt you are correct you raise a very good point - yes Laplace Smoothing is quite frankly nonsense! Just simply throwing away those features can be a valid approach, particularly when the denominator
|
7,434
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
You may not have enough data for the task, and hence the estimate would not be accurate, or the model would overfit the training data. For example, we may end up with a black swan problem: there are no black swans in our training examples, but that doesn't mean no black swan exists in the world. We can just add a prior to our model; we can also call it a "pseudocount".
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
|
You may not have enough data for the task and hence the estimate would not be accurate or the model would overfit training data, for example, we may end up with a black swan problem. There is no bla
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
You may not have enough data for the task, and hence the estimate would not be accurate, or the model would overfit the training data. For example, we may end up with a black swan problem: there are no black swans in our training examples, but that doesn't mean no black swan exists in the world. We can just add a prior to our model; we can also call it a "pseudocount".
|
In Naive Bayes, why bother with Laplace smoothing when we have unknown words in the test set?
You may not have enough data for the task and hence the estimate would not be accurate or the model would overfit training data, for example, we may end up with a black swan problem. There is no bla
|
7,435
|
Why is AUC higher for a classifier that is less accurate than for one that is more accurate?
|
Improper scoring rules such as proportion classified correctly, sensitivity, and specificity are not only arbitrary (in the choice of threshold) but improper, i.e., they have the property that maximizing them leads to a bogus model, inaccurate predictions, and selecting the wrong features. It is good that they disagree with proper scoring rules (log-likelihood; logarithmic scoring rule; Brier score) and the $c$-index (a semi-proper scoring rule - area under the ROC curve; concordance probability; Wilcoxon statistic; Somers' $D_{xy}$ rank correlation coefficient); this gives us more confidence in proper scoring rules.
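As a small illustration (labels and predicted probabilities invented), one proper scoring rule, the Brier score, can separate two models that accuracy cannot:

```python
# Brier score: mean squared error of predicted probabilities (lower is better).

def brier(y_true, p_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, p_pred)) / len(y_true)

def accuracy(y_true, p_pred, thresh=0.5):
    return sum((p > thresh) == bool(t) for t, p in zip(y_true, p_pred)) / len(y_true)

y = [1, 0, 1, 0]
sharp = [0.9, 0.1, 0.9, 0.1]   # confident and well calibrated
hedgy = [0.6, 0.4, 0.6, 0.4]   # same ranking, much less informative

print(accuracy(y, sharp), accuracy(y, hedgy))  # 1.0 1.0 -- accuracy can't tell them apart
print(brier(y, sharp), brier(y, hedgy))        # roughly 0.01 vs 0.16
```

Both models are perfectly "accurate" at the 0.5 threshold, yet the proper rule rewards the better-calibrated one.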
|
Why is AUC higher for a classifier that is less accurate than for one that is more accurate?
|
Improper scoring rules such as proportion classified correctly, sensitivity, and specificity are not only arbitrary (in choice of threshold) but are improper, i.e., they have the property that maximiz
|
Why is AUC higher for a classifier that is less accurate than for one that is more accurate?
Improper scoring rules such as proportion classified correctly, sensitivity, and specificity are not only arbitrary (in the choice of threshold) but improper, i.e., they have the property that maximizing them leads to a bogus model, inaccurate predictions, and selecting the wrong features. It is good that they disagree with proper scoring rules (log-likelihood; logarithmic scoring rule; Brier score) and the $c$-index (a semi-proper scoring rule - area under the ROC curve; concordance probability; Wilcoxon statistic; Somers' $D_{xy}$ rank correlation coefficient); this gives us more confidence in proper scoring rules.
|
Why is AUC higher for a classifier that is less accurate than for one that is more accurate?
Improper scoring rules such as proportion classified correctly, sensitivity, and specificity are not only arbitrary (in choice of threshold) but are improper, i.e., they have the property that maximiz
|
7,436
|
Why is AUC higher for a classifier that is less accurate than for one that is more accurate?
|
Why is the AUC for A better than B, when B "seems" to outperform A with respect to accuracy?
Accuracy is computed at the threshold value of 0.5, while AUC aggregates the "accuracies" computed at all possible threshold values. AUC can be seen as an average (expected value) of those accuracies over all threshold values.
So, how do I really judge/compare the classification performances of A and B? I mean, do I use the AUC value? Do I use the acc value? And why?
It depends. ROC curves tell you something about how well your model separates the two classes, no matter where the threshold value is. Accuracy is a measure which usually works well when the classes keep the same balance on the train and test sets, and when scores are really probabilities. ROC gives you more hints on how the model will behave if this assumption is violated (however, this is only an idea).
Furthermore, when I apply proper scoring rules to A and B, B outperforms A in terms of log loss, quadratic loss, and spherical loss (p < 0.001). How do these weigh in on judging classification performance with respect to AUC?
I do not know. You have to understand better what your data is about, what each model is capable of learning from your data, and decide later which is the best compromise. The reason this happens is that there is no universal metric of classifier performance.
The ROC graph for A looks very smooth (it is a curved arc), but the ROC graph for B looks like a set of connected lines. Why is this?
That is probably because the Bayesian model gives you smooth transitions between the two classes. That translates into many threshold values, which means many points on the ROC curve. The second model probably produces fewer distinct values, predicting the same value on bigger regions of the input space. Basically, the first ROC curve is also made of lines; the only difference is that there are so many adjacent small lines that you see it as a curve.
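A sketch with invented scores makes the threshold-vs-ranking distinction concrete: model A ranks the classes perfectly but sits on the wrong side of the 0.5 threshold, while model B does the opposite.

```python
# Invented scores: A ranks perfectly but every score falls below 0.5;
# B is better at the 0.5 threshold but ranks imperfectly.

def accuracy(y, scores, thresh=0.5):
    return sum((s >= thresh) == bool(t) for t, s in zip(y, scores)) / len(y)

def auc(y, scores):
    # probability that a random positive is ranked above a random negative
    pos = [s for t, s in zip(y, scores) if t == 1]
    neg = [s for t, s in zip(y, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y       = [1, 1, 1, 0, 0, 0]
model_a = [0.45, 0.44, 0.43, 0.42, 0.41, 0.40]  # perfect ranking, bad threshold
model_b = [0.90, 0.60, 0.40, 0.60, 0.30, 0.20]  # better threshold, imperfect ranking

print(accuracy(y, model_a), auc(y, model_a))  # 0.5 1.0
print(accuracy(y, model_b), auc(y, model_b))  # roughly 0.67 and 0.83
```

Model A has the lower accuracy but the higher AUC, which mirrors the situation in the question.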
|
Why is AUC higher for a classifier that is less accurate than for one that is more accurate?
|
Why is the AUC for A better than B, when B "seems" to outperform A with respect to accuracy?
Accuracy is computed at the threshold value of 0.5. While AUC is computed by adding all the "accuracies" co
|
Why is AUC higher for a classifier that is less accurate than for one that is more accurate?
Why is the AUC for A better than B, when B "seems" to outperform A with respect to accuracy?
Accuracy is computed at the threshold value of 0.5, while AUC aggregates the "accuracies" computed at all possible threshold values. AUC can be seen as an average (expected value) of those accuracies over all threshold values.
So, how do I really judge/compare the classification performances of A and B? I mean, do I use the AUC value? Do I use the acc value? And why?
It depends. ROC curves tell you something about how well your model separates the two classes, no matter where the threshold value is. Accuracy is a measure which usually works well when the classes keep the same balance on the train and test sets, and when scores are really probabilities. ROC gives you more hints on how the model will behave if this assumption is violated (however, this is only an idea).
Furthermore, when I apply proper scoring rules to A and B, B outperforms A in terms of log loss, quadratic loss, and spherical loss (p < 0.001). How do these weigh in on judging classification performance with respect to AUC?
I do not know. You have to understand better what your data is about, what each model is capable of learning from your data, and decide later which is the best compromise. The reason this happens is that there is no universal metric of classifier performance.
The ROC graph for A looks very smooth (it is a curved arc), but the ROC graph for B looks like a set of connected lines. Why is this?
That is probably because the Bayesian model gives you smooth transitions between the two classes. That translates into many threshold values, which means many points on the ROC curve. The second model probably produces fewer distinct values, predicting the same value on bigger regions of the input space. Basically, the first ROC curve is also made of lines; the only difference is that there are so many adjacent small lines that you see it as a curve.
|
Why is AUC higher for a classifier that is less accurate than for one that is more accurate?
Why is the AUC for A better than B, when B "seems" to outperform A with respect to accuracy?
Accuracy is computed at the threshold value of 0.5. While AUC is computed by adding all the "accuracies" co
|
7,437
|
Why is AUC higher for a classifier that is less accurate than for one that is more accurate?
|
Why is the AUC for A better than B, when B "seems" to outperform A with respect to accuracy?
First, although the cut-off (0.5) is the same, it is not comparable at all between A and B. In fact, it looks pretty different from your histograms! Look at B: all your predictions are < 0.5.
Second, why is B so accurate? Because of class imbalance. In test B, you have 19138 negative examples, and 6687 positives (why the numbers are different in A is unclear to me: missing values maybe?). This means that by simply saying that everything is negative, I can already achieve a pretty good accuracy: precisely 19138 / (19138 + 6687) = 74%. Note that this requires absolutely no knowledge at all beyond the fact that there is an imbalance between the classes: even the dumbest model can do that!
And this is exactly what test B does at the 0.5 threshold... you get (nearly) only negative predictions.
A is more of a mixed bag. Although it has a slightly lower accuracy, note that its sensitivity is much higher at this cut-off...
Finally, you cannot compare the accuracy (a performance at one threshold) with the AUC (an average performance on all possible thresholds). As these metrics measure different things, it is not surprising that they are different.
So, how do I really judge/compare the classification performances of A and B? I mean, do I use the AUC value? Do I use the acc value? And why?
Furthermore, when I apply proper scoring rules to A and B, B outperforms A in terms of log loss, quadratic loss, and spherical loss (p < 0.001). How do these weigh in on judging classification performance with respect to AUC?
You have to think: what is it you really want to do? What is important? Ultimately, only you can answer this question based on your knowledge of the problem. Maybe AUC makes sense (it rarely really does when you really think about it, except when you don't want to make a decision yourself but let others do so - most likely if you are making a tool for others to use), maybe the accuracy (if you need a binary, go/no-go answer), but maybe at different thresholds, maybe some other more continuous measures, maybe one of the measures suggested by Frank Harrell... as already stated, there is no universal answer here.
The ROC graph for A looks very smooth (it is a curved arc), but the ROC graph for B looks like a set of connected lines. Why is this?
Back to the predictions that you showed on the histograms: A gives you a continuous, or nearly continuous, prediction. By contrast, B returns mostly only a few distinct values (as you can see from the "spiky" histogram).
In a ROC curve, each point corresponds to a threshold. In A, you have a lot of thresholds (because the predictions are continuous), so the curve is smooth. In B, you have only a few thresholds, so the curve "jumps" from one SN/SP point to another.
You see vertical jumps when only the sensitivity changes (the threshold makes a difference only for positive cases), horizontal jumps when only the specificity changes (the threshold makes a difference only for negative examples), and diagonal jumps when the change of threshold affects both classes.
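The majority-class baseline computed above can be checked directly:

```python
# Majority-class baseline from the answer: with 19138 negatives and
# 6687 positives, always predicting "negative" already scores ~74%.

negatives, positives = 19138, 6687
baseline_accuracy = negatives / (negatives + positives)
print(round(baseline_accuracy, 3))  # 0.741
```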
|
Why is AUC higher for a classifier that is less accurate than for one that is more accurate?
|
Why is the AUC for A better than B, when B "seems" to outperform A with respect to accuracy?
First, although the cut-off (0.5) is the same, it is not comparable at all between A and B. In fact, it lo
|
Why is AUC higher for a classifier that is less accurate than for one that is more accurate?
Why is the AUC for A better than B, when B "seems" to outperform A with respect to accuracy?
First, although the cut-off (0.5) is the same, it is not comparable at all between A and B. In fact, it looks pretty different from your histograms! Look at B: all your predictions are < 0.5.
Second, why is B so accurate? Because of class imbalance. In test B, you have 19138 negative examples, and 6687 positives (why the numbers are different in A is unclear to me: missing values maybe?). This means that by simply saying that everything is negative, I can already achieve a pretty good accuracy: precisely 19138 / (19138 + 6687) = 74%. Note that this requires absolutely no knowledge at all beyond the fact that there is an imbalance between the classes: even the dumbest model can do that!
And this is exactly what test B does at the 0.5 threshold... you get (nearly) only negative predictions.
A is more of a mixed bag. Although it has a slightly lower accuracy, note that its sensitivity is much higher at this cut-off...
Finally, you cannot compare the accuracy (a performance at one threshold) with the AUC (an average performance on all possible thresholds). As these metrics measure different things, it is not surprising that they are different.
So, how do I really judge/compare the classification performances of A and B? I mean, do I use the AUC value? Do I use the acc value? And why?
Furthermore, when I apply proper scoring rules to A and B, B outperforms A in terms of log loss, quadratic loss, and spherical loss (p < 0.001). How do these weigh in on judging classification performance with respect to AUC?
You have to think: what is it you really want to do? What is important? Ultimately, only you can answer this question based on your knowledge of the problem. Maybe AUC makes sense (it rarely really does when you really think about it, except when you don't want to make a decision yourself but let others do so - most likely if you are making a tool for others to use), maybe the accuracy (if you need a binary, go/no-go answer), but maybe at different thresholds, maybe some other more continuous measures, maybe one of the measures suggested by Frank Harrell... as already stated, there is no universal answer here.
The ROC graph for A looks very smooth (it is a curved arc), but the ROC graph for B looks like a set of connected lines. Why is this?
Back to the predictions that you showed on the histograms. A gives you a continuous, or nearly-continuous prediction. To the contrary, B returns mostly only a few different values (as you can see by the "spiky" histogram).
In a ROC curve, each point corresponds to a threshold. In A, you have a lot of thresholds (because the predictions are continuous), so the curve is smooth. In B, you have only a few thresholds, so the curve "jumps" from one SN/SP pair to another.
You see vertical jumps when only the sensitivity changes (the threshold change affects only positive cases), horizontal jumps when only the specificity changes (the threshold change affects only negative examples), and diagonal jumps when the change of threshold affects both classes.
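A quick numerical sketch of why the number of distinct predicted values controls how smooth the ROC curve looks (the scores here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Model "A": continuous scores, so essentially every value is distinct;
# the ROC curve has ~1000 candidate thresholds and looks smooth.
scores_a = rng.random(1000)

# Model "B": scores quantized to a handful of values (a "spiky"
# histogram); only a few distinct thresholds exist, so the ROC curve
# jumps from one (sensitivity, specificity) pair to the next.
scores_b = np.round(scores_a, 1)

print(len(np.unique(scores_a)))  # 1000 distinct thresholds
print(len(np.unique(scores_b)))  # at most 11 distinct thresholds
```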
|
7,438
|
Why are the weights of RNN/LSTM networks shared across time?
|
The accepted answer focuses on the practical side of the question: it would require a lot of resources if the parameters are not shared. However, the decision to share parameters in an RNN was made when any serious computation was a problem (the 1980s, according to Wikipedia), so I believe it wasn't the main argument (though still valid).
There are pure theoretical reasons for parameter sharing:
It helps in applying the model to examples of different lengths. While reading a sequence, if the RNN model used different parameters for each step during training, it wouldn't generalize to unseen sequences of different lengths.
Oftentimes, the sequences operate according to the same rules across the sequence.
For instance, in NLP:
"On Monday it was snowing"
"It was snowing on Monday"
...these two sentences mean the same thing, though the details are in different parts of the sequence. Parameter sharing reflects the fact that we are performing the same task at each step; as a result, we don't have to relearn the rules at each point in the sentence.
LSTM is no different in this sense, hence it uses shared parameters as well.
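As a sketch of the first point, a plain (Elman-style) RNN cell with one shared set of weights handles sequences of any length, because the same parameters are applied at every step (the sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_h = 4, 8  # made-up input / hidden sizes

# One shared set of parameters, reused at every time step.
W_xh = rng.normal(size=(d_h, d_in))
W_hh = rng.normal(size=(d_h, d_h))
b = np.zeros(d_h)

def rnn_forward(xs):
    """Plain (Elman-style) RNN, applying the same weights at each step."""
    h = np.zeros(d_h)
    for x in xs:                  # xs can have ANY length
        h = np.tanh(W_xh @ x + W_hh @ h + b)
    return h

# The same parameters handle sequences of different lengths: nothing
# about the weights depends on the sequence length.
short_seq = rng.normal(size=(3, d_in))
long_seq = rng.normal(size=(50, d_in))
print(rnn_forward(short_seq).shape, rnn_forward(long_seq).shape)  # (8,) (8,)
```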
|
7,439
|
Why are the weights of RNN/LSTM networks shared across time?
|
The 'shared weights' perspective comes from thinking about RNNs as feedforward networks unrolled across time. If the weights were different at each moment in time, this would just be a feedforward network. But, I suppose another way to think about it would be as an RNN whose weights are a time-varying function (and that could let you keep the ability to process variable length sequences).
If you did this, the number of parameters would grow linearly with the number of time steps. That would be a big explosion of parameters for sequences of any appreciable length. It would indeed make the network more powerful, if you had the massive computational resources to run it and the massive data to constrain it. For long sequences, it would probably be computationally infeasible and you'd get overfitting. In fact, people usually go in the opposite direction by running truncated backpropagation through time, which only unrolls the network for some short period of time, rather than over the entire sequence. This is done for computational feasibility. Interestingly, RNNs can still learn temporal structure that extends beyond the truncation length, because the recurrent units can store memory from before.
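A back-of-the-envelope count illustrates the linear blow-up; the layer sizes below are made up, and the formula counts input-to-hidden and hidden-to-hidden weights plus biases for a vanilla RNN:

```python
# Parameter count for an RNN with input size n_in and hidden size n_h,
# unrolled over T time steps (illustrative sizes).
n_in, n_h, T = 100, 256, 1000

per_step = n_h * n_in + n_h * n_h + n_h  # W_xh + W_hh + bias

tied = per_step        # shared weights: constant in T
untied = per_step * T  # separate weights per step: linear in T

print(tied)    # 91392 parameters, regardless of sequence length
print(untied)  # 91392000 parameters for a 1000-step unrolled sequence
```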
|
7,440
|
Why are the weights of RNN/LSTM networks shared across time?
|
I think that, since RNNs with hidden-to-hidden recurrences (and time-shared weights) are equivalent to Universal Turing Machines, letting them have different weights for different time steps does not make them more powerful.
|
7,441
|
Why are the weights of RNN/LSTM networks shared across time?
|
I am trying hard to visualize how weight sharing combined with recurrence and combined with word embeddings behaves in a high-dimensional space.
Taking the example from @Maxim and visualizing a network that suggests the next word in the sequence:
"On Monday it was", when accumulated through recurrence, will be a point in a high-dimensional space, and thanks to word embeddings, "On Tuesday it was" will lie on the same manifold. Given this accumulated state as input, a downstream fully connected layer with high memory capacity will learn to map it to things like cold, snowing, etc. There may be other stored mappings like hectic, slow, obvious, etc., and this may be learnt by one unit of the layer. Another unit may have learnt to map the high-dimensional vector formed from the accumulated state of "It was snowing on" to vectors like Christmas, Monday, the, etc. That covers the hidden-hidden and hidden-output weights. As for the input-hidden weights, although they are shared, the units they lead to will be activated by different aspects of a sentence (people and places, stop words, etc.), making them position (time) agnostic.
|
7,442
|
Why are the weights of RNN/LSTM networks shared across time?
|
An RNN is a time-based neural network: at the end of the time steps (the length of the input) it forms a vector that represents a thought, preserving sequence information across time. It may help to think of the thought vector as some sort of figure or object that takes its proper shape through the time steps, depending on the inputs it sees at each step.
The weight matrices are initialized randomly at first. Taking next-letter prediction with an RNN as an example: when we send the first letter, the network predicts the next letter by assigning probabilities to each possible letter, and we can update the weights using the gradients at that time step. The same goes for all the letters until the word ends. In the end, the weights are updated, via backpropagation, in such a way that the confidence (probability) of the right letter increases.
This training process continues over a large amount of data, tuning the weight parameters so that, given the sequence seen so far and the current state, a particular letter (or word, in the case of machine translation) has a high probability of occurring.
Sharing weights across the time steps thus helps in understanding the sequence and, from an applied point of view, reduces training time.
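The prediction step described above (assigning a probability to each possible next letter from the current hidden state) can be sketched as follows; all sizes and weights here are made up, only the shape of the computation matters:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = list("abcdefghijklmnopqrstuvwxyz")
d_h = 16

W_hy = rng.normal(size=(len(vocab), d_h))  # shared hidden-to-output weights
h = rng.normal(size=d_h)                   # hidden state accumulated so far

# Softmax over the output scores: one probability per candidate letter.
logits = W_hy @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()

# Training would nudge the weights (via backpropagation) to raise the
# probability of the letter that actually comes next.
print(float(probs.sum()))          # ~1.0: a distribution over letters
print(vocab[int(probs.argmax())])  # the currently most likely letter
```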
|
7,443
|
How to tell the difference between linear and non-linear regression models?
|
There are (at least) three senses in which a regression can be considered "linear." To distinguish them, let's start with an extremely general regression model
$$Y = f(X,\theta,\varepsilon).$$
To keep the discussion simple, take the independent variables $X$ to be fixed and accurately measured (rather than random variables). They model $n$ observations of $p$ attributes each, giving rise to the $n$-vector of responses $Y$. Conventionally, $X$ is represented as an $n\times p$ matrix and $Y$ as a column $n$-vector. The (finite $q$-vector) $\theta$ comprises the parameters. $\varepsilon$ is a vector-valued random variable. It usually has $n$ components, but sometimes has fewer. The function $f$ is vector-valued (with $n$ components to match $Y$) and is usually assumed continuous in its last two arguments ($\theta$ and $\varepsilon$).
The archetypal example, of fitting a line to $(x,y)$ data, is the case where $X$ is a vector of numbers $(x_i,\,i=1,2,\ldots,n)$--the x-values; $Y$ is a parallel vector of $n$ numbers $(y_i)$; $\theta = (\alpha,\beta)$ gives the intercept $\alpha$ and slope $\beta$; and $\varepsilon = (\varepsilon_1,\varepsilon_2,\ldots,\varepsilon_n)$ is a vector of "random errors" whose components are independent (and usually assumed to have identical but unknown distributions of mean zero). In the preceding notation,
$$y_i = \alpha + \beta x_i +\varepsilon_i = f(X,\theta,\varepsilon)_i$$
with $\theta = (\alpha,\beta)$.
The regression function may be linear in any (or all) of its three arguments:
"Linear regression," or a "linear model," ordinarily means that $f$ is linear as a function of the parameters $\theta$. The SAS meaning of "nonlinear regression" is in this sense, with the added assumption that $f$ is differentiable in its second argument (the parameters). This assumption makes it easier to find solutions.
A "linear relationship between $X$ and $Y$" means $f$ is linear as a function of $X$.
A model has additive errors when $f$ is linear in $\varepsilon$. In such cases it is always assumed that $\mathbb{E}(\varepsilon) = 0$. (Otherwise, it wouldn't be right to think of $\varepsilon$ as "errors" or "deviations" from "correct" values.)
Every possible combination of these characteristics can happen and is useful. Let's survey the possibilities.
A linear model of a linear relationship with additive errors. This is ordinary (multiple) regression, already exhibited above and more generally written as
$$Y = X\theta + \varepsilon.$$
$X$ has been augmented, if necessary, by adjoining a column of constants, and $\theta$ is a $p$-vector.
A linear model of a nonlinear relationship with additive errors. This can be couched as a multiple regression by augmenting the columns of $X$ with nonlinear functions of $X$ itself. For instance,
$$y_i = \alpha + \beta x_i^2 + \varepsilon_i$$
is of this form. It is linear in $\theta=(\alpha,\beta)$; it has additive errors; and it is linear in the values $(1,x_i^2)$ even though $x_i^2$ is a nonlinear function of $x_i$.
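A short numerical sketch of this case (simulated data): ordinary least squares works because the model is linear in the parameters, even though the relationship in $x$ is curved.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 2.0, 0.5              # "true" values for the simulation
x = rng.uniform(-3, 3, size=200)
y = alpha + beta * x**2 + rng.normal(scale=0.1, size=200)

# Augment the design matrix with the nonlinear column x^2; the fit
# itself is ordinary (multiple) linear regression.
X = np.column_stack([np.ones_like(x), x**2])
theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)

print(np.round(theta_hat, 2))  # close to [2.0, 0.5]
```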
A linear model of a linear relationship with nonadditive errors. An example is multiplicative error,
$$y_i = (\alpha + \beta x_i)\varepsilon_i.$$
(In such cases the $\varepsilon_i$ can be interpreted as "multiplicative errors" when the location of $\varepsilon_i$ is $1$. However, the proper sense of location is not necessarily the expectation $\mathbb{E}(\varepsilon_i)$ anymore: it might be the median or the geometric mean, for instance. A similar comment about location assumptions applies, mutatis mutandis, in all other non-additive-error contexts too.)
A linear model of a nonlinear relationship with nonadditive errors. E.g.,
$$y_i = (\alpha + \beta x_i^2)\varepsilon_i.$$
A nonlinear model of a linear relationship with additive errors. A nonlinear model involves combinations of its parameters that not only are nonlinear, they cannot even be linearized by re-expressing the parameters.
As a non-example, consider
$$y_i = \alpha\beta + \beta^2 x_i + \varepsilon_i.$$
By defining $\alpha^\prime = \alpha\beta$ and $\beta^\prime=\beta^2$, and restricting $\beta^\prime \ge 0$, this model can be rewritten
$$y_i = \alpha^\prime + \beta^\prime x_i + \varepsilon_i,$$
exhibiting it as a linear model (of a linear relationship with additive errors).
As an example, consider
$$y_i = \alpha + \alpha^2 x_i + \varepsilon_i.$$
It is impossible to find a new parameter $\alpha^\prime$, depending on $\alpha$, that will linearize this as a function of $\alpha^\prime$ (while keeping it linear in $x_i$ as well).
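To see the practical consequence, here is a sketch with simulated data: because $\alpha$ ties the intercept and slope together, the model must be fit by minimizing the squared error over $\alpha$ directly (a crude grid search here, rather than a linear solve).

```python
import numpy as np

rng = np.random.default_rng(2)
alpha_true = 1.5
x = rng.uniform(0, 5, size=200)
y = alpha_true + alpha_true**2 * x + rng.normal(scale=0.1, size=200)

# No re-parameterization makes y = alpha + alpha^2 * x linear in a new
# parameter, so minimize the sum of squared errors over alpha directly.
alphas = np.linspace(0.0, 3.0, 3001)
sse = [np.sum((y - (a + a**2 * x))**2) for a in alphas]
alpha_hat = alphas[int(np.argmin(sse))]

print(round(float(alpha_hat), 2))  # close to 1.5
```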
A nonlinear model of a nonlinear relationship with additive errors.
$$y_i = \alpha + \alpha^2 x_i^2 + \varepsilon_i.$$
A nonlinear model of a linear relationship with nonadditive errors.
$$y_i = (\alpha + \alpha^2 x_i)\varepsilon_i.$$
A nonlinear model of a nonlinear relationship with nonadditive errors.
$$y_i = (\alpha + \alpha^2 x_i^2)\varepsilon_i.$$
Although these exhibit eight distinct forms of regression, they do not constitute a classification system because some forms can be converted into others. A standard example is the conversion of a linear model with nonadditive errors (assumed to have positive support)
$$y_i = (\alpha + \beta x_i)\varepsilon_i$$
into a linear model of a nonlinear relationship with additive errors via the logarithm,
$$\log(y_i) = \mu_i + \log(\alpha + \beta x_i) + (\log(\varepsilon_i) - \mu_i)$$
Here, the log geometric mean $\mu_i = \mathbb{E}\left(\log(\varepsilon_i)\right)$ has been removed from the error terms (to ensure they have zero means, as required) and incorporated into the other terms (where its value will need to be estimated). Indeed, one major reason to re-express the dependent variable $Y$ is to create a model with additive errors. Re-expression can also linearize $Y$ as a function of either (or both) of the parameters and explanatory variables.
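The conversion can be checked numerically; a minimal sketch with made-up values, whose only point is that after taking logs the error enters additively:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 2.0, 1.0
x = rng.uniform(1, 5, size=5)
eps = rng.lognormal(mean=0.0, sigma=0.2, size=5)  # positive multiplicative errors

y = (alpha + beta * x) * eps     # multiplicative-error model
log_y = np.log(y)

# log(y) = log(alpha + beta * x) + log(eps): the error term is now
# added rather than multiplied, so additive-error machinery applies.
print(np.allclose(log_y, np.log(alpha + beta * x) + np.log(eps)))  # True
```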
Collinearity
Collinearity (of the column vectors in $X$) can be an issue in any form of regression. The key to understanding this is to recognize that collinearity leads to difficulties in estimating the parameters. Abstractly and quite generally, compare two models $Y = f(X,\theta,\varepsilon)$ and $Y=f(X^\prime,\theta,\varepsilon^\prime)$ where $X^\prime$ is $X$ with one column slightly changed. If this induces enormous changes in the estimates $\hat\theta$ and $\hat\theta^\prime$, then obviously we have a problem. One way in which this problem can arise is in a linear model, linear in $X$ (that is, types (1) or (5) above), where the components of $\theta$ are in one-to-one correspondence with the columns of $X$. When one column is a non-trivial linear combination of the others, the estimate of its corresponding parameter can be any real number at all. That is an extreme example of such sensitivity.
From this point of view it should be clear that collinearity is a potential problem for linear models of nonlinear relationships (regardless of the additivity of the errors) and that this generalized concept of collinearity is potentially a problem in any regression model. When you have redundant variables, you will have problems identifying some parameters.
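A small simulation makes the sensitivity concrete (made-up data; the second column is a near copy of the first):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + 1e-6 * rng.normal(size=n)        # nearly collinear with x1
y = x1 + x2 + rng.normal(scale=0.1, size=n)

theta, *_ = np.linalg.lstsq(np.column_stack([x1, x2]), y, rcond=None)

# Perturb the second column very slightly and refit.
x2b = x1 + 1e-6 * rng.normal(size=n)
theta_b, *_ = np.linalg.lstsq(np.column_stack([x1, x2b]), y, rcond=None)

# The individual coefficients are unstable between the two fits, yet
# the well-determined combination theta[0] + theta[1] stays near 2.
print(float(theta.sum()), float(theta_b.sum()))
```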
|
7,444
|
How to tell the difference between linear and non-linear regression models?
|
A model is linear if it is linear in parameters or can be transformed to be linear in parameters (linearizable). Linear models can model linear or non-linear relationships. Let's expand on each of these.
A model is linear in parameters if it can be written as a sum of terms, where each term is either a constant or a parameter multiplying a predictor ($X_i$):
$$Y = \beta_0 + \beta_1 X_1 + \beta_2 X_2 + \cdots + \beta_k X_k$$
Note that this definition is very narrow. Only the models meeting this definition are linear; every other model is non-linear.
There are two types of linear models that are confused for non-linear models:
1. Linear models of non-linear relationships
For example, the model $Y = \beta_0 + \beta_1 X_1^2$ describes a non-linear relationship (because the derivative of $Y$ with respect to $X_1$ is a function of $X_1$). By creating a new variable $W_1 = X_1^2$ and re-writing the equation with $W_1$ replacing $X_1^2$, we have an equation that satisfies the definition of a linear model.
2. Models that aren't immediately linear but can become linear after a transformation (linearizable). Below are 2 examples of linearizable models:
Example 1: consider, for instance, the exponential model
$$Y = \beta_0 e^{\beta_1 X_1}\varepsilon.$$
This model may appear to be non-linear because it does not meet the definition of a model that is linear in parameters; however, it can be transformed into a linear model, hence it is linearizable (transformably linear) and is thus considered a linear model. Start by taking the natural logarithm of both sides to obtain
$$\ln Y = \ln \beta_0 + \beta_1 X_1 + \ln \varepsilon,$$
then make the substitutions $W = \ln Y$, $\beta_0^\prime = \ln \beta_0$, and $\varepsilon^\prime = \ln \varepsilon$ to obtain the linear model below:
$$W = \beta_0^\prime + \beta_1 X_1 + \varepsilon^\prime.$$
Example 2: consider the reciprocal model
$$Y = \frac{1}{\beta_0 + \beta_1 X_1}.$$
This model may appear to be non-linear because it does not meet the definition of a model that is linear in parameters; however, it can be transformed into a linear model, hence it is linearizable (transformably linear) and is thus considered a linear model. Start by taking the reciprocal of both sides to obtain
$$\frac{1}{Y} = \beta_0 + \beta_1 X_1,$$
then make the substitution $W = \frac{1}{Y}$ to obtain the linear model below:
$$W = \beta_0 + \beta_1 X_1.$$
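A quick sketch of the reciprocal linearization at work, on simulated data, assuming a model of the form $Y = 1/(\beta_0 + \beta_1 X_1)$ with a small additive disturbance in the denominator:

```python
import numpy as np

rng = np.random.default_rng(5)
b0, b1 = 1.0, 0.5
x = rng.uniform(1, 5, size=200)
y = 1.0 / (b0 + b1 * x + rng.normal(scale=0.01, size=200))

w = 1.0 / y                                   # the substitution W = 1/Y
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, w, rcond=None)  # ordinary least squares

print(np.round(coef, 2))  # close to [1.0, 0.5]
```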
Any model that is not linear (not even through linearization) is non-linear. Think of it this way: If a model does not meet the definition of a linear model then it is a non-linear model, unless it can be proven to be linearizable, at which point it earns the right to be called a linear model.
Whuber's answer above, as well as Glen_b's answer at the link below, will add more color to my answer.
Nonlinear vs. generalized linear model: How do you refer to logistic, Poisson, etc. regression?
How to tell the difference between linear and non-linear regression models?
You should start by distinguishing between reality and the model you are using to describe it.
The equation you mentioned is a polynomial (x raised to a power), i.e. the relationship it describes is non-linear, but you can still fit it with a generalized linear model (using a link function) or with polynomial regression, because the parameters (b1, b2, b3, c) enter the model linearly.
Hope that helps; the key distinction really is reality vs. model.
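To see the linearity-in-parameters point concretely, here is a minimal NumPy sketch (hypothetical, noise-free data) fitting the cubic y = c + b1*x + b2*x^2 + b3*x^3 by ordinary least squares, which works precisely because the parameters enter linearly:

```python
import numpy as np

# The model y = c + b1*x + b2*x^2 + b3*x^3 is non-linear in x but
# linear in the parameters (c, b1, b2, b3), so it can be fit by
# ordinary least squares on a polynomial design matrix.
# (Hypothetical noise-free data, chosen so the fit is exact.)
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
c, b1, b2, b3 = 2.0, 1.5, -1.0, -0.5
y = c + b1 * x + b2 * x**2 + b3 * x**3

# Design matrix with columns 1, x, x^2, x^3
X = np.vander(x, 4, increasing=True)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

assert np.allclose(coef, [c, b1, b2, b3], atol=1e-8)
```

The same least-squares machinery fails for a model that is genuinely non-linear in its parameters (e.g. y = a*exp(b*x) + noise without transformation), which is the distinction being drawn here.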
Why is the Expectation Maximization algorithm guaranteed to converge to a local optimum?
EM is not guaranteed to converge to a local optimum. It is only guaranteed to converge to a point with zero gradient with respect to the parameters, so it can indeed get stuck at saddle points.
Why is the Expectation Maximization algorithm guaranteed to converge to a local optimum?
First of all, it is possible that EM converges to a local min, a local max, or a saddle point of the likelihood function. More precisely, as Tom Minka pointed out, EM is guaranteed to converge to a point with zero gradient.
I can think of two ways to see this; the first view is pure intuition, and the second view is the sketch of a formal proof. First, I shall, very briefly, explain how EM works:
Expectation Maximization (EM) is a sequential bound optimization technique where in iteration $t$, we first construct a (lower) bound $b_t(\theta)$ on the likelihood function $L(\theta)$ and then maximize the bound to obtain the new solution $\theta_t = \arg\max_\theta b_t(\theta)$, and keep doing this until the new solution does not change.
Expectation Maximization as gradient ascent
In each iteration $t$, EM requires that the bound $b_t$ touches the likelihood function $L$ at the solution of the previous iteration i.e. $\theta_{t-1}$ which implies their gradients are the same too; that is $g = \nabla b_t(\theta_{t-1}) = \nabla L(\theta_{t-1})$. So, EM is at least as good as gradient ascent because $\theta_t$ is at least as good as $\theta_{t-1} + \eta g$. In other words:
if EM converges to $\theta^*$ then $\theta^*$ is a convergent point for gradient ascent too and EM satisfies any property shared among gradient ascent solutions (including zero gradient value).
Sketch of a formal proof
One can show that the gap between the bounds and the likelihood function converges to zero; that is
$$
\lim_{t \rightarrow \infty} L(\theta_t) - b_t(\theta_t) = 0. \tag{1}
$$
One can prove that the gradient of the bound also converges to the gradient of the likelihood function; that is:
$$
\lim_{t \rightarrow \infty} \nabla L(\theta_t) = \nabla b_t(\theta_t). \tag{2}
$$
Because of $(1)$ and $(2)$ and that the bounds used in EM are differentiable, and that $\theta_t = \arg\max_\theta b_t(\theta)$, we have that $\nabla b_t(\theta_t)=0$ and, therefore, $\lim_{t \rightarrow \infty} \nabla L(\theta_t) = 0$.
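The monotonicity that drives this argument is easy to observe numerically. A minimal sketch (an assumed two-component 1-D Gaussian mixture with fixed equal weights and unit variances, updating only the means) shows the observed-data log-likelihood never decreases across EM iterations:

```python
import numpy as np

# Minimal EM for a two-component 1-D Gaussian mixture, assuming fixed
# equal weights (0.5 / 0.5) and unit variances so that only the means
# are updated. The point of the sketch: the observed-data
# log-likelihood is non-decreasing across iterations, as the bound
# argument above guarantees.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 150), rng.normal(3, 1, 150)])

def loglik(x, mu):
    comp = np.exp(-0.5 * (x[:, None] - mu) ** 2) / np.sqrt(2 * np.pi)
    return float(np.log(0.5 * comp.sum(axis=1)).sum())

mu = np.array([-0.5, 0.5])              # deliberately poor start
hist = [loglik(x, mu)]
for _ in range(50):
    # E-step: responsibilities (constants cancel for equal weights/variances)
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: maximize the bound, giving responsibility-weighted means
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)
    hist.append(loglik(x, mu))

assert (np.diff(hist) >= -1e-9).all()   # likelihood never decreases
```

With components this well separated the means settle near the true values of -3 and 3; the monotone likelihood sequence is exactly the property the proof sketch above formalizes.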
Generating data with a given sample covariance matrix
There are two different typical situations for these kind of problems:
i) you want to generate a sample from a given distribution whose population characteristics match the ones specified (but due to sampling variation, you don't have the sample characteristics exactly matching).
ii) you want to generate a sample whose sample characteristics match the ones specified (but, due to the constraints of exactly matching sample quantities to a prespecified set of values, don't really come from the distribution you want).
You want the second case -- but you get it by following the same approach as the first case, with an extra standardization step.
So for multivariate normals, either can be done in a fairly straightforward manner:
With first case you could use random normals without the population structure (such as iid standard normal which have expectation 0 and identity covariance matrix) and then impose it - transform to get the covariance matrix and mean you want. If $\mu$ and $\Sigma$ are the population mean and covariance you need and $z$ are iid standard normal, you calculate $y=Lz+\mu$, for some $L$ where $LL'=\Sigma$ (e.g. a suitable $L$ could be obtained via Cholesky decomposition). Then $y$ has the desired population characteristics.
With the second, you have to first transform your random normals to remove even the random variation away from the zero mean and identity covariance (making the sample mean zero and sample covariance $I_n$), then proceed as before. But that initial step of removing the sample deviation from exact mean $0$, variance $I$ interferes with the distribution. (In small samples it can be quite severe.)
This can be done by subtracting the sample mean of $z$ ($z^*=z-\bar z$) and computing the Cholesky decomposition of the sample covariance matrix of $z^*$. If $L^*$ is its left (lower-triangular) Cholesky factor, then $z^{(0)}=(L^*)^{-1}z^*$ has sample mean 0 and identity sample covariance. You can then calculate $y=Lz^{(0)}+\mu$ and have a sample with the desired sample moments. (Depending on how your sample quantities are defined, there may be an extra small fiddle involved with multiplying/dividing by factors like $\sqrt{\frac{n-1}{n}}$, but it's easy enough to identify that need.)
Generating data with a given sample covariance matrix
@Glen_b gave a good answer (+1), which I want to illustrate with some code.
How to generate $n$ samples from a $d$-dimensional multivariate Gaussian distribution with a given covariance matrix $\boldsymbol \Sigma$? This is easy to do by generating samples from a standard Gaussian and multiplying them by a square root of the covariance matrix, e.g. by $\mathrm{chol}(\boldsymbol \Sigma)$. This is covered in many threads on CV, e.g. here: How can I generate data with a prespecified correlation matrix? Here is a simple Matlab implementation:
n = 100;
d = 2;
Sigma = [ 1 0.7 ; ...
0.7 1 ];
rng(42)
X = randn(n, d) * chol(Sigma);
The sample covariance matrix of the resulting data will of course not be exactly $\boldsymbol \Sigma$; e.g. in the example above cov(X) returns
1.0690 0.7296
0.7296 1.0720
How to generate data with a pre-specified sample correlation or covariance matrix?
As @Glen_b wrote, after generating data from a standard Gaussian, center, whiten, and standardize it, so that it has sample covariance matrix $\mathbf I$; only then multiply it with $\mathrm{chol}(\boldsymbol \Sigma)$.
Here is the continuation of my Matlab example:
X = randn(n, d);
X = bsxfun(@minus, X, mean(X));
X = X * inv(chol(cov(X)));
X = X * chol(Sigma);
Now cov(X), as required, returns
1.0000 0.7000
0.7000 1.0000
Generating data with a given sample covariance matrix
I am very late to the party here, but I recently had to do this in Python. Here is how you can generate a dataset with exact means and covariances/correlations:
import numpy as np
# Define a vector of means and a matrix of covariances
mean = [3, 3]
Sigma = [[1, 0.70],
[0.70, 1]]
# Generate 100 cases
X = np.random.default_rng().multivariate_normal(mean, Sigma, 100).T
# Subtract the mean from each variable
for n in range(X.shape[0]):
X[n] = X[n] - X[n].mean()
# Make each variable in X orthogonal to one another
L_inv = np.linalg.cholesky(np.cov(X, bias = True))
L_inv = np.linalg.inv(L_inv)
X = np.dot(L_inv, X)
# Rescale X to exactly match Sigma
L = np.linalg.cholesky(Sigma)
X = np.dot(L, X)
# Add the mean back into each variable
for n in range(X.shape[0]):
X[n] = X[n] + mean[n]
# The covariance of the generated data should match Sigma
print(np.cov(X, bias = True))
Please note: I am using the population (rather than sample) variances and covariances, which are calculated using N (rather than N - 1) as the denominator. This is represented in the "bias = True" setting for np.cov().
Relative importance of a set of predictors in a random forests classification in R
First I would like to clarify what the importance metric actually measures.
MeanDecreaseGini is a measure of variable importance based on the Gini impurity index used for the calculation of splits during training. A common misconception is that the variable importance metric refers to the Gini used for assessing model performance, which is closely related to AUC, but this is wrong. Here is the explanation from the randomForest package written by Breiman and Cutler:
Gini importance
Every time a split of a node is made on variable m the gini impurity criterion for the two descendent nodes is less than the parent node. Adding up the gini decreases for each individual variable over all trees in the forest gives a fast variable importance that is often very consistent with the permutation importance measure.
The Gini impurity index is defined as
$$
G = \sum_{i=1}^{n_c} p_i(1-p_i)
$$
where $n_c$ is the number of classes in the target variable and $p_i$ is the proportion of class $i$ in the node.
For a two-class problem the impurity traces a curve over the class proportion that is maximized for the 50-50 sample and minimized for homogeneous sets.
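In the two-class case the index reduces to $G(p) = 2p(1-p)$; a quick check (in Python, for illustration) confirms it vanishes for pure nodes and peaks at the 50-50 split:

```python
import numpy as np

# Gini impurity of a vector of class proportions (illustrative helper)
def gini(p):
    p = np.asarray(p, dtype=float)
    return float((p * (1 - p)).sum())

ps = np.linspace(0, 1, 101)
curve = np.array([gini([p, 1 - p]) for p in ps])

assert gini([1.0, 0.0]) == 0.0       # pure node: zero impurity
assert np.isclose(curve.max(), 0.5)  # peak value 2 * 0.5 * 0.5
assert ps[curve.argmax()] == 0.5     # attained at the 50-50 split
```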
The importance is then calculated as
$$
I = G_{parent} - G_{split1} - G_{split2}
$$
averaged over all splits in the forest involving the predictor in question. As this is an average it could easily be extended to be averaged over all splits on variables contained in a group.
Looking closer, we know each variable importance is an average conditional on the variable used, and the MeanDecreaseGini of a group would just be the mean of these importances weighted by the share with which each variable is used in the forest relative to the other variables in the same group. This holds because of the tower property
$$
\mathbb{E}[\mathbb{E}[X|Y]] = \mathbb{E}[X]
$$
Now, to answer your question directly it is not as simple as just summing up all importances in each group to get the combined MeanDecreaseGini but computing the weighted average will get you the answer you are looking for. We just need to find the variable frequencies within each group.
Here is a simple script to get these from a random forest object in R:
var.share <- function(rf.obj, members) {
count <- table(rf.obj$forest$bestvar)[-1]
names(count) <- names(rf.obj$forest$ncat)
share <- count[members] / sum(count[members])
return(share)
}
Just pass in the names of the variables in the group as the members parameter.
I hope this answers your question. I can write up a function to get the group importances directly if it is of interest.
EDIT:
Here is a function that gives the group importance given a randomForest object and a list of vectors with variable names. It uses var.share as previously defined. I have not done any input checking so you need to make sure you use the right variable names.
group.importance <- function(rf.obj, groups) {
var.imp <- as.matrix(sapply(groups, function(g) {
sum(importance(rf.obj, 2)[g, ]*var.share(rf.obj, g))
}))
colnames(var.imp) <- "MeanDecreaseGini"
return(var.imp)
}
Example of usage:
library(randomForest)
data(iris)
rf.obj <- randomForest(Species ~ ., data=iris)
groups <- list(Sepal=c("Sepal.Width", "Sepal.Length"),
Petal=c("Petal.Width", "Petal.Length"))
group.importance(rf.obj, groups)
>
MeanDecreaseGini
Sepal 6.187198
Petal 43.913020
It also works for overlapping groups:
overlapping.groups <- list(Sepal=c("Sepal.Width", "Sepal.Length"),
Petal=c("Petal.Width", "Petal.Length"),
Width=c("Sepal.Width", "Petal.Width"),
Length=c("Sepal.Length", "Petal.Length"))
group.importance(rf.obj, overlapping.groups)
>
MeanDecreaseGini
Sepal 6.187198
Petal 43.913020
Width 30.513776
Length 30.386706
Relative importance of a set of predictors in a random forests classification in R
The function defined above, G = sum over classes [pi(1−pi)], expands to sum[pi] − sum[pi^2] = 1 − sum over classes [pi^2], so the two expressions are algebraically identical; both are the Gini impurity.
The entropy criterion is a different function: −sum over classes [pi·log(pi)], and the difference between the entropy of the parent node and that of the children nodes is the information gain.
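A quick numerical check (in Python) that sum[pi(1−pi)] and 1 − sum[pi^2] coincide, while the entropy −sum[pi·log(pi)] does not:

```python
import numpy as np

# Illustrative class proportions for a three-class node
p = np.array([0.2, 0.3, 0.5])

gini_a = float((p * (1 - p)).sum())    # sum p_i (1 - p_i)
gini_b = 1.0 - float((p ** 2).sum())   # 1 - sum p_i^2
entropy = float(-(p * np.log2(p)).sum())

assert np.isclose(gini_a, gini_b)       # identical: both equal 0.62 here
assert not np.isclose(gini_a, entropy)  # entropy is a different quantity
```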
How to make a reward function in reinforcement learning?
Reward functions describe how the agent "ought" to behave. In other words, they have "normative" content, stipulating what you want the agent to accomplish. For example, some rewarding state $s$ might represent the taste of food. Or perhaps, $(s,a)$ might represent the act of tasting the food. So, to the extent that the reward function determines what the agent's motivations are, yes, you have to make it up!
There are no absolute restrictions, but if your reward function is "better behaved", the agent will learn better. Practically, this means speed of convergence and not getting stuck in local minima. The finer points will depend strongly on the flavor of reinforcement learning you are using. For example, is the state/action space continuous or discrete? Is the world or the action selection stochastic? Is reward continuously harvested, or only at the end?
One way to view the problem is that the reward function determines the hardness of the problem. For example, traditionally, we might specify a single state to be rewarded:
$$
R(s_1)=1
$$
$$
R(s_{2..n})=0
$$
In this case, the problem to be solved is quite a hard one, compared to, say, $R(s_i)=1/i^2$, where there is a reward gradient over states. For hard problems, specifying more detail, e.g. $R(s,a)$ or $R(s,a,s^\prime)$ can help some algorithms by providing extra clues, but potentially at the expense of requiring more exploration. You might well need to include costs as negative terms in $R$ (e.g. energetic costs), to make the problem well-specified.
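The contrast between these two reward specifications can be sketched numerically; this toy R snippet is illustrative only and not part of the original answer:

```r
# Two reward functions over n = 10 discrete states, as in the text
n <- 10
R_sparse <- c(1, rep(0, n - 1))   # R(s_1) = 1, R(s_2..n) = 0: flat almost everywhere
R_graded <- 1 / (1:n)^2           # R(s_i) = 1/i^2: a gradient pointing toward s_1
# Under the sparse reward, a random explorer gets no signal until it happens
# to reach s_1; the graded reward decreases monotonically away from the goal.
```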
For the case of a continuous state space, if you want an agent to learn easily, the reward function should be continuous and differentiable. So polynomials can work well for many algorithms. Further, try to remove localised minima. There are a number of examples of how NOT to make a reward function -- like the Rastrigin function. Having said this, several RL algorithms (e.g. Boltzmann machines) are somewhat robust to these.
If you are using RL to solve a real-world problem, you will probably find that although finding the reward function is the hardest part of the problem, it is intimately tied up with how you specify the state space. For example, in a time-dependent problem, the distance to the goal often makes a poor reward function (e.g. in the mountain car problem). Such situations can be solved by using higher dimensional state spaces (hidden states or memory traces), or by hierarchical RL.
At an abstract level, unsupervised learning was supposed to obviate stipulating "right and wrong" performance. But we can see now that RL simply shifts the responsibility from the teacher/critic to the reward function. There is a less circular way to solve the problem: that is, to infer the best reward function. One method is called inverse RL or "apprenticeship learning", which generates a reward function that would reproduce observed behaviours. Finding the best reward function to reproduce a set of observations can also be implemented by MLE, Bayesian, or information theoretic methods - if you google for "inverse reinforcement learning".
|
How to make a reward function in reinforcement learning?
|
Reward functions describe how the agent "ought" to behave. In other words, they have "normative" content, stipulating what you want the agent to accomplish. For example, some rewarding state $s$ might
|
How to make a reward function in reinforcement learning?
Reward functions describe how the agent "ought" to behave. In other words, they have "normative" content, stipulating what you want the agent to accomplish. For example, some rewarding state $s$ might represent the taste of food. Or perhaps, $(s,a)$ might represent the act of tasting the food. So, to the extent that the reward function determines what the agent's motivations are, yes, you have to make it up!
There are no absolute restrictions, but if your reward function is "better behaved", then the agent will learn better. Practically, this means speed of convergence, and not getting stuck in local minima. But further specifications will depend strongly on the species of reinforcement learning you are using. For example, is the state/action space continuous or discrete? Is the world or the action selection stochastic? Is reward continuously harvested, or only at the end?
One way to view the problem is that the reward function determines the hardness of the problem. For example, traditionally, we might specify a single state to be rewarded:
$$
R(s_1)=1
$$
$$
R(s_{2..n})=0
$$
In this case, the problem to be solved is quite a hard one, compared to, say, $R(s_i)=1/i^2$, where there is a reward gradient over states. For hard problems, specifying more detail, e.g. $R(s,a)$ or $R(s,a,s^\prime)$ can help some algorithms by providing extra clues, but potentially at the expense of requiring more exploration. You might well need to include costs as negative terms in $R$ (e.g. energetic costs), to make the problem well-specified.
For the case of a continuous state space, if you want an agent to learn easily, the reward function should be continuous and differentiable. So polynomials can work well for many algorithms. Further, try to remove localised minima. There are a number of examples of how NOT to make a reward function -- like the Rastrigin function. Having said this, several RL algorithms (e.g. Boltzmann machines) are somewhat robust to these.
If you are using RL to solve a real-world problem, you will probably find that although finding the reward function is the hardest part of the problem, it is intimately tied up with how you specify the state space. For example, in a time-dependent problem, the distance to the goal often makes a poor reward function (e.g. in the mountain car problem). Such situations can be solved by using higher dimensional state spaces (hidden states or memory traces), or by hierarchical RL.
At an abstract level, unsupervised learning was supposed to obviate stipulating "right and wrong" performance. But we can see now that RL simply shifts the responsibility from the teacher/critic to the reward function. There is a less circular way to solve the problem: that is, to infer the best reward function. One method is called inverse RL or "apprenticeship learning", which generates a reward function that would reproduce observed behaviours. Finding the best reward function to reproduce a set of observations can also be implemented by MLE, Bayesian, or information theoretic methods - if you google for "inverse reinforcement learning".
|
How to make a reward function in reinforcement learning?
Reward functions describe how the agent "ought" to behave. In other words, they have "normative" content, stipulating what you want the agent to accomplish. For example, some rewarding state $s$ might
|
7,454
|
How to make a reward function in reinforcement learning?
|
Designing reward functions is a hard problem indeed. Generally, sparse reward functions are easier to define (e.g., get +1 if you win the game, else 0). However, sparse rewards also slow down learning because the agent needs to take many actions before getting any reward. This problem is also known as the credit assignment problem.
Rather than having a table representation for rewards, you can use continuous functions as well (such as a polynomial). This is usually the case when the state and action spaces are continuous rather than discrete.
|
How to make a reward function in reinforcement learning?
|
Designing reward functions is a hard problem indeed. Generally, sparse reward functions are easier to define (e.g., get +1 if you win the game, else 0). However, sparse rewards also slow down learning
|
How to make a reward function in reinforcement learning?
Designing reward functions is a hard problem indeed. Generally, sparse reward functions are easier to define (e.g., get +1 if you win the game, else 0). However, sparse rewards also slow down learning because the agent needs to take many actions before getting any reward. This problem is also known as the credit assignment problem.
Rather than having a table representation for rewards, you can use continuous functions as well (such as a polynomial). This is usually the case when the state and action spaces are continuous rather than discrete.
|
How to make a reward function in reinforcement learning?
Designing reward functions is a hard problem indeed. Generally, sparse reward functions are easier to define (e.g., get +1 if you win the game, else 0). However, sparse rewards also slow down learning
|
7,455
|
How to perform isometric log-ratio transformation
|
The ILR (Isometric Log-Ratio) transformation is used in the analysis of compositional data. Any given observation is a set of positive values summing to unity, such as the proportions of chemicals in a mixture or proportions of total time spent in various activities. The sum-to-unity invariant implies that although there may be $k\ge 2$ components to each observation, there are only $k-1$ functionally independent values. (Geometrically, the observations lie on a $k-1$-dimensional simplex in $k$-dimensional Euclidean space $\mathbb{R}^k$. This simplicial nature is manifest in the triangular shapes of the scatterplots of simulated data shown below.)
Typically, the distributions of the components become "nicer" when log transformed. This transformation can be scaled by dividing all values in an observation by their geometric mean before taking the logs. (Equivalently, the logs of the data in any observation are centered by subtracting their mean.) This is known as the "Centered Log-Ratio" transformation, or CLR. The resulting values still lie within a hyperplane in $\mathbb{R}^k$, because the scaling causes the sum of the logs to be zero. The ILR consists of choosing any orthonormal basis for this hyperplane: the $k-1$ coordinates of each transformed observation become its new data. Equivalently, the hyperplane is rotated (or reflected) to coincide with the plane with vanishing $k^\text{th}$ coordinate and one uses the first $k-1$ coordinates. (Because rotations and reflections preserve distance they are isometries, whence the name of this procedure.)
Tsagris, Preston, and Wood state that "a standard choice of [the rotation matrix] $H$ is the Helmert sub-matrix obtained by removing the first row from the Helmert matrix."
The Helmert matrix of order $k$ is constructed in a simple manner (see Harville p. 86 for instance). Its first row is all $1$s. The next row is one of the simplest that can be made orthogonal to the first row, namely $(1, -1, 0, \ldots, 0)$. Row $j$ is among the simplest that is orthogonal to all preceding rows: its first $j-1$ entries are $1$s, which guarantees it is orthogonal to rows $2, 3, \ldots, j-1$, and its $j^\text{th}$ entry is set to $1-j$ to make it orthogonal to the first row (that is, its entries must sum to zero). All rows are then rescaled to unit length.
Here, to illustrate the pattern, is the $4\times 4$ Helmert matrix before its rows have been rescaled:
$$\pmatrix{1&1&1&1 \\ 1&-1&0&0 \\ 1&1&-2&0 \\ 1&1&1&-3}.$$
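The construction can be verified numerically in R with the built-in contr.helmert mentioned below (a small check, not part of the original derivation; contr.helmert returns the rows negated and transposed, which does not affect orthonormality):

```r
# Rows 2..k of the Helmert matrix, rescaled to unit length
k <- 4
H <- t(contr.helmert(k)) / sqrt((2:k) * (2:k - 1))  # (k-1) x k matrix
ortho  <- max(abs(H %*% t(H) - diag(k - 1)))  # 0 if the rows are orthonormal
rowsum <- max(abs(H %*% rep(1, k)))           # 0 if every row sums to zero
```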
(Edit added August 2017) One particularly nice aspect of these "contrasts" (which are read row by row) is their interpretability. The first row is dropped, leaving $k-1$ remaining rows to represent the data. The second row is proportional to the difference between the second variable and the first. The third row is proportional to the difference between the third variable and the first two. Generally, row $j$ ($2\le j \le k$) reflects the difference between variable $j$ and all those that precede it, variables $1, 2, \ldots, j-1$. This leaves the first variable $j=1$ as a "base" for all contrasts. I have found these interpretations helpful when following the ILR by Principal Components Analysis (PCA): it enables the loadings to be interpreted, at least roughly, in terms of comparisons among the original variables. I have inserted a line into the R implementation of ilr below that gives the output variables suitable names to help with this interpretation. (End of edit.)
Since R provides a function contr.helmert to create such matrices (albeit without the scaling, and with rows and columns negated and transposed), you don't even have to write the (simple) code to do it. Using this, I implemented the ILR (see below). To exercise and test it, I generated $1000$ independent draws from a Dirichlet distribution (with parameters $1,2,3,4$) and plotted their scatterplot matrix. Here, $k=4$.
The points all clump near the lower left corners and fill triangular patches of their plotting areas, as is characteristic of compositional data.
Their ILR has just three variables, again plotted as a scatterplot matrix:
This does indeed look nicer: the scatterplots have acquired more characteristic "elliptical cloud" shapes, more amenable to second-order analyses such as linear regression and PCA.
Tsagris et al. generalize the CLR by using a Box-Cox transformation, which generalizes the logarithm. (The log is a Box-Cox transformation with parameter $0$.) It is useful because, as the authors (correctly IMHO) argue, in many applications the data ought to determine their transformation. For these Dirichlet data a parameter of $1/2$ (which is halfway between no transformation and a log transformation) works beautifully:
"Beautiful" refers to the simple description this picture permits: instead of having to specify the location, shape, size, and orientation of each point cloud, we need only observe that (to an excellent approximation) all the clouds are circular with similar radii. In effect, the CLR has simplified an initial description requiring at least 16 numbers into one that requires only 12 numbers and the ILR has reduced that to just four numbers (three univariate locations and one radius), at a price of specifying the ILR parameter of $1/2$--a fifth number. When such dramatic simplifications happen with real data, we usually figure we're on to something: we have made a discovery or achieved an insight.
This generalization is implemented in the ilr function below. The command to produce these "Z" variables was simply
z <- ilr(x, 1/2)
One advantage of the Box-Cox transformation is its applicability to observations that include true zeros: it is still defined provided the parameter is positive.
References
Michail T. Tsagris, Simon Preston and Andrew T.A. Wood, A data-based power transformation for compositional data. arXiv:1106.1451v2 [stat.ME] 16 Jun 2011.
David A. Harville, Matrix Algebra From a Statistician's Perspective. Springer Science & Business Media, Jun 27, 2008.
Here is the R code.
#
# ILR (Isometric log-ratio) transformation.
# `x` is an `n` by `k` matrix of positive observations with k >= 2.
#
ilr <- function(x, p=0) {
y <- log(x)
if (p != 0) y <- (exp(p * y) - 1) / p # Box-Cox transformation
y <- y - rowMeans(y, na.rm=TRUE) # Recentered values
k <- dim(y)[2]
H <- contr.helmert(k) # Dimensions k by k-1
H <- t(H) / sqrt((2:k)*(2:k-1)) # Dimensions k-1 by k
z <- y %*% t(H)                      # Rotated/reflected values
if(!is.null(colnames(x)))            # (Helps with interpreting output)
colnames(z) <- paste0(colnames(x)[-1], ".ILR")
return(z)
}
#
# Specify a Dirichlet(alpha) distribution for testing.
#
alpha <- c(1,2,3,4)
#
# Simulate and plot compositional data.
#
n <- 1000
k <- length(alpha)
x <- matrix(rgamma(n*k, alpha), nrow=n, byrow=TRUE)
x <- x / rowSums(x)
colnames(x) <- paste0("X.", 1:k)
pairs(x, pch=19, col="#00000040", cex=0.6)
#
# Obtain the ILR.
#
y <- ilr(x)
colnames(y) <- paste0("Y.", 1:(k-1))
#
# Plot the ILR.
#
pairs(y, pch=19, col="#00000040", cex=0.6)
|
How to perform isometric log-ratio transformation
|
The ILR (Isometric Log-Ratio) transformation is used in the analysis of compositional data. Any given observation is a set of positive values summing to unity, such as the proportions of chemicals in
|
How to perform isometric log-ratio transformation
The ILR (Isometric Log-Ratio) transformation is used in the analysis of compositional data. Any given observation is a set of positive values summing to unity, such as the proportions of chemicals in a mixture or proportions of total time spent in various activities. The sum-to-unity invariant implies that although there may be $k\ge 2$ components to each observation, there are only $k-1$ functionally independent values. (Geometrically, the observations lie on a $k-1$-dimensional simplex in $k$-dimensional Euclidean space $\mathbb{R}^k$. This simplicial nature is manifest in the triangular shapes of the scatterplots of simulated data shown below.)
Typically, the distributions of the components become "nicer" when log transformed. This transformation can be scaled by dividing all values in an observation by their geometric mean before taking the logs. (Equivalently, the logs of the data in any observation are centered by subtracting their mean.) This is known as the "Centered Log-Ratio" transformation, or CLR. The resulting values still lie within a hyperplane in $\mathbb{R}^k$, because the scaling causes the sum of the logs to be zero. The ILR consists of choosing any orthonormal basis for this hyperplane: the $k-1$ coordinates of each transformed observation become its new data. Equivalently, the hyperplane is rotated (or reflected) to coincide with the plane with vanishing $k^\text{th}$ coordinate and one uses the first $k-1$ coordinates. (Because rotations and reflections preserve distance they are isometries, whence the name of this procedure.)
Tsagris, Preston, and Wood state that "a standard choice of [the rotation matrix] $H$ is the Helmert sub-matrix obtained by removing the first row from the Helmert matrix."
The Helmert matrix of order $k$ is constructed in a simple manner (see Harville p. 86 for instance). Its first row is all $1$s. The next row is one of the simplest that can be made orthogonal to the first row, namely $(1, -1, 0, \ldots, 0)$. Row $j$ is among the simplest that is orthogonal to all preceding rows: its first $j-1$ entries are $1$s, which guarantees it is orthogonal to rows $2, 3, \ldots, j-1$, and its $j^\text{th}$ entry is set to $1-j$ to make it orthogonal to the first row (that is, its entries must sum to zero). All rows are then rescaled to unit length.
Here, to illustrate the pattern, is the $4\times 4$ Helmert matrix before its rows have been rescaled:
$$\pmatrix{1&1&1&1 \\ 1&-1&0&0 \\ 1&1&-2&0 \\ 1&1&1&-3}.$$
(Edit added August 2017) One particularly nice aspect of these "contrasts" (which are read row by row) is their interpretability. The first row is dropped, leaving $k-1$ remaining rows to represent the data. The second row is proportional to the difference between the second variable and the first. The third row is proportional to the difference between the third variable and the first two. Generally, row $j$ ($2\le j \le k$) reflects the difference between variable $j$ and all those that precede it, variables $1, 2, \ldots, j-1$. This leaves the first variable $j=1$ as a "base" for all contrasts. I have found these interpretations helpful when following the ILR by Principal Components Analysis (PCA): it enables the loadings to be interpreted, at least roughly, in terms of comparisons among the original variables. I have inserted a line into the R implementation of ilr below that gives the output variables suitable names to help with this interpretation. (End of edit.)
Since R provides a function contr.helmert to create such matrices (albeit without the scaling, and with rows and columns negated and transposed), you don't even have to write the (simple) code to do it. Using this, I implemented the ILR (see below). To exercise and test it, I generated $1000$ independent draws from a Dirichlet distribution (with parameters $1,2,3,4$) and plotted their scatterplot matrix. Here, $k=4$.
The points all clump near the lower left corners and fill triangular patches of their plotting areas, as is characteristic of compositional data.
Their ILR has just three variables, again plotted as a scatterplot matrix:
This does indeed look nicer: the scatterplots have acquired more characteristic "elliptical cloud" shapes, more amenable to second-order analyses such as linear regression and PCA.
Tsagris et al. generalize the CLR by using a Box-Cox transformation, which generalizes the logarithm. (The log is a Box-Cox transformation with parameter $0$.) It is useful because, as the authors (correctly IMHO) argue, in many applications the data ought to determine their transformation. For these Dirichlet data a parameter of $1/2$ (which is halfway between no transformation and a log transformation) works beautifully:
"Beautiful" refers to the simple description this picture permits: instead of having to specify the location, shape, size, and orientation of each point cloud, we need only observe that (to an excellent approximation) all the clouds are circular with similar radii. In effect, the CLR has simplified an initial description requiring at least 16 numbers into one that requires only 12 numbers and the ILR has reduced that to just four numbers (three univariate locations and one radius), at a price of specifying the ILR parameter of $1/2$--a fifth number. When such dramatic simplifications happen with real data, we usually figure we're on to something: we have made a discovery or achieved an insight.
This generalization is implemented in the ilr function below. The command to produce these "Z" variables was simply
z <- ilr(x, 1/2)
One advantage of the Box-Cox transformation is its applicability to observations that include true zeros: it is still defined provided the parameter is positive.
References
Michail T. Tsagris, Simon Preston and Andrew T.A. Wood, A data-based power transformation for compositional data. arXiv:1106.1451v2 [stat.ME] 16 Jun 2011.
David A. Harville, Matrix Algebra From a Statistician's Perspective. Springer Science & Business Media, Jun 27, 2008.
Here is the R code.
#
# ILR (Isometric log-ratio) transformation.
# `x` is an `n` by `k` matrix of positive observations with k >= 2.
#
ilr <- function(x, p=0) {
y <- log(x)
if (p != 0) y <- (exp(p * y) - 1) / p # Box-Cox transformation
y <- y - rowMeans(y, na.rm=TRUE) # Recentered values
k <- dim(y)[2]
H <- contr.helmert(k) # Dimensions k by k-1
H <- t(H) / sqrt((2:k)*(2:k-1)) # Dimensions k-1 by k
z <- y %*% t(H)                      # Rotated/reflected values
if(!is.null(colnames(x)))            # (Helps with interpreting output)
colnames(z) <- paste0(colnames(x)[-1], ".ILR")
return(z)
}
#
# Specify a Dirichlet(alpha) distribution for testing.
#
alpha <- c(1,2,3,4)
#
# Simulate and plot compositional data.
#
n <- 1000
k <- length(alpha)
x <- matrix(rgamma(n*k, alpha), nrow=n, byrow=TRUE)
x <- x / rowSums(x)
colnames(x) <- paste0("X.", 1:k)
pairs(x, pch=19, col="#00000040", cex=0.6)
#
# Obtain the ILR.
#
y <- ilr(x)
colnames(y) <- paste0("Y.", 1:(k-1))
#
# Plot the ILR.
#
pairs(y, pch=19, col="#00000040", cex=0.6)
|
How to perform isometric log-ratio transformation
The ILR (Isometric Log-Ratio) transformation is used in the analysis of compositional data. Any given observation is a set of positive values summing to unity, such as the proportions of chemicals in
|
7,456
|
How to perform isometric log-ratio transformation
|
For your use case, it is probably ok to just scale everything down to one. The fact the numbers don't add up exactly to 24 will add a little extra noise to the data, but it shouldn't mess things up that much.
As @whuber correctly stated, since we are dealing with proportions, we have to account for dependencies between the variables (since they add up to one). The ilr transform appropriately deals with this, since it transforms the variables into $\mathbb{R}^{D-1}$ for $D$ proportions.
All of the technical details aside, it is important to know how to properly interpret the ilr transformed data. In the end, the ilr transform just refers to the log ratios of groups. But it defines it with respect to some predefined hierarchy. If you define a hierarchy as follows
each transformed variable can be calculated as
$b_i = \sqrt{\frac{rs}{r + s}}\ln \frac{g(R_i)}{g(S_i)}$
where $i$ represents an internal node in the hierarchy, $R_i$ defines one partition of variables corresponding to $i$, $S_i$ defines the other partition of variables corresponding to $i$, $r$ and $s$ are the numbers of variables in $R_i$ and $S_i$, and $g(...)$ refers to the geometric mean. These transformed variables are also known as balances.
So the next question is, how do you define your hierarchy of variables?
This is really up to you, but if you have three variables, there aren't too many combinations to mess with. For instance, you could just define the hierarchy to be
/-A
/(A|B)-----|
-(AB|C)----| \-B
|
\-C
where A represents the time spent sleeping, B represents time spent sedentary, C represents time spent doing physical activity, (A|B) represents the normalized log ratio between $A$ and $B$ (i.e. $\frac{1}{\sqrt{2}}\ln \frac{A}{B}$ ), and $(AB|C)$ refers to the normalized log ratio between $A$, $B$ and $C$ (i.e. $\sqrt{\frac{2}{3}} \ln \frac{\sqrt{AB}}{C}$, with the geometric mean of $A$ and $B$ in the numerator). If there are many variables, I'd check out some of the work done with principal balances
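For concreteness, the two balances for a single 24-hour observation can be computed directly from the general balance formula given earlier (the hours used here are made up for illustration):

```r
# A single day split into sleeping (A), sedentary (B) and active (C) hours,
# rescaled to proportions
x <- c(A = 12, B = 8, C = 4) / 24
b1 <- (1 / sqrt(2)) * log(x[["A"]] / x[["B"]])                 # balance (A|B)
b2 <- sqrt(2 / 3) * log(sqrt(x[["A"]] * x[["B"]]) / x[["C"]])  # balance (AB|C),
# with the geometric mean of A and B in the numerator
```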
But going back to your original question, how can you use this information to actually perform the ilr transformation?
If you are using R, I'd check out the compositions package
To use that package, you'll need to understand how to create a sequential binary partition (SBP), which is how you define the hierarchy. For the hierarchy defined above, you can represent the SBP with the following matrix.
A B C
(A|B) 1 -1 0
(AB|C) 1 1 -1
where the positive values represent the variables in the numerator, the negative values represent the variables in the denominator, and zeros represent the absence of that variable in the balance. You can build the orthonormal basis using balanceBase from the SBP that you defined.
Once you have this you should be able to pass in your table of proportions along with the basis that you calculated above.
I'd check out this reference for the original definition of balances
|
How to perform isometric log-ratio transformation
|
For your use case, it is probably ok to just scale everything down to one. The fact the numbers don't add up exactly to 24 will add a little extra noise to the data, but it shouldn't mess things up t
|
How to perform isometric log-ratio transformation
For your use case, it is probably ok to just scale everything down to one. The fact the numbers don't add up exactly to 24 will add a little extra noise to the data, but it shouldn't mess things up that much.
As @whuber correctly stated, since we are dealing with proportions, we have to account for dependencies between the variables (since they add up to one). The ilr transform appropriately deals with this, since it transforms the variables into $\mathbb{R}^{D-1}$ for $D$ proportions.
All of the technical details aside, it is important to know how to properly interpret the ilr transformed data. In the end, the ilr transform just refers to the log ratios of groups. But it defines it with respect to some predefined hierarchy. If you define a hierarchy as follows
each transformed variable can be calculated as
$b_i = \sqrt{\frac{rs}{r + s}}\ln \frac{g(R_i)}{g(S_i)}$
where $i$ represents an internal node in the hierarchy, $R_i$ defines one partition of variables corresponding to $i$, $S_i$ defines the other partition of variables corresponding to $i$, $r$ and $s$ are the numbers of variables in $R_i$ and $S_i$, and $g(...)$ refers to the geometric mean. These transformed variables are also known as balances.
So the next question is, how do you define your hierarchy of variables?
This is really up to you, but if you have three variables, there aren't too many combinations to mess with. For instance, you could just define the hierarchy to be
/-A
/(A|B)-----|
-(AB|C)----| \-B
|
\-C
where A represents the time spent sleeping, B represents time spent sedentary, C represents time spent doing physical activity, (A|B) represents the normalized log ratio between $A$ and $B$ (i.e. $\frac{1}{\sqrt{2}}\ln \frac{A}{B}$ ), and $(AB|C)$ refers to the normalized log ratio between $A$, $B$ and $C$ (i.e. $\sqrt{\frac{2}{3}} \ln \frac{\sqrt{AB}}{C}$, with the geometric mean of $A$ and $B$ in the numerator). If there are many variables, I'd check out some of the work done with principal balances
But going back to your original question, how can you use this information to actually perform the ilr transformation?
If you are using R, I'd check out the compositions package
To use that package, you'll need to understand how to create a sequential binary partition (SBP), which is how you define the hierarchy. For the hierarchy defined above, you can represent the SBP with the following matrix.
A B C
(A|B) 1 -1 0
(AB|C) 1 1 -1
where the positive values represent the variables in the numerator, the negative values represent the variables in the denominator, and zeros represent the absence of that variable in the balance. You can build the orthonormal basis using balanceBase from the SBP that you defined.
Once you have this you should be able to pass in your table of proportions along with the basis that you calculated above.
I'd check out this reference for the original definition of balances
|
How to perform isometric log-ratio transformation
For your use case, it is probably ok to just scale everything down to one. The fact the numbers don't add up exactly to 24 will add a little extra noise to the data, but it shouldn't mess things up t
|
7,457
|
How to perform isometric log-ratio transformation
|
The above posts answer the question about how to construct an ILR basis and get your ILR balances. To add to this, the choice of which basis can ease the interpretation of your results.
You may be interested in the following partition:
(1) (sleeping,sedentary|physical_activity)
(2) (sleeping|sedentary).
Since you have three parts in your composition, you will obtain two ILR balances to analyze. By setting up the partition as above, you can obtain balances corresponding to "active or not" (1) and "which form of inactivity" (2).
If you analyze each ILR balance separately, for instance performing regression against time-of-day or time-of-year to see if there are any changes, you can interpret the results in terms of changes in "active or not" and changes in "which form of inactivity".
If, on the other hand, you perform techniques like PCA which obtain a new basis in ILR space, your results will not depend on your choice of partition. This is because your data exist in CLR-space, the $(D-1)$-dimensional plane orthogonal to the one-vector, and the ILR balances are different choices of unit-norm axes to describe the data's position on the CLR plane.
|
How to perform isometric log-ratio transformation
|
The above posts answer the question about how to construct an ILR basis and get your ILR balances. To add to this, the choice of which basis can ease the interpretation of your results.
You may be int
|
How to perform isometric log-ratio transformation
The above posts answer the question about how to construct an ILR basis and get your ILR balances. To add to this, the choice of which basis can ease the interpretation of your results.
You may be interested in the following partition:
(1) (sleeping,sedentary|physical_activity)
(2) (sleeping|sedentary).
Since you have three parts in your composition, you will obtain two ILR balances to analyze. By setting up the partition as above, you can obtain balances corresponding to "active or not" (1) and "which form of inactivity" (2).
If you analyze each ILR balance separately, for instance performing regression against time-of-day or time-of-year to see if there are any changes, you can interpret the results in terms of changes in "active or not" and changes in "which form of inactivity".
If, on the other hand, you perform techniques like PCA which obtain a new basis in ILR space, your results will not depend on your choice of partition. This is because your data exist in CLR-space, the $(D-1)$-dimensional plane orthogonal to the one-vector, and the ILR balances are different choices of unit-norm axes to describe the data's position on the CLR plane.
|
How to perform isometric log-ratio transformation
The above posts answer the question about how to construct an ILR basis and get your ILR balances. To add to this, the choice of which basis can ease the interpretation of your results.
You may be int
|
7,458
|
"Frequency" value for seconds/minutes intervals data in R
|
The "frequency" is the number of observations per "cycle" (normally a year, but sometimes a week, a day, an hour, etc). This is the opposite of the definition of frequency in physics, or in Fourier analysis, where "period" is the length of the cycle, and "frequency" is the inverse of period. When using the ts() function in R, the following choices should be used.
Data frequency
Annual 1
Quarterly 4
Monthly 12
Weekly 52
Actually, there are not 52 weeks in a year, but 365.25/7 = 52.18 on average. But most functions which use ts objects require integer frequency.
Once the frequency of observations is smaller than a week, then there is usually more than one way of handling the frequency. For example, data observed every minute might have an hourly seasonality (frequency=60), a daily seasonality (frequency=24x60=1440), a weekly seasonality (frequency=24x60x7=10080) and an annual seasonality (frequency=24x60x365.25=525960). If you want to use a ts object, then you need to decide which of these is the most important.
An alternative is to use a msts object (defined in the forecast package) which handles multiple seasonality time series. Then you can specify all the frequencies that might be relevant. It is also flexible enough to handle non-integer frequencies.
You won't necessarily want to include all of these frequencies --- just the ones that are likely to be present in the data. As you have only 180 days of data, you can probably ignore the annual seasonality. If the data are measurements of a natural phenomenon (e.g., temperature), you might also be able to ignore the weekly seasonality.
With multiple seasonalities, you could use a TBATS model, or Fourier terms in a regression or ARIMA model. The fourier function from the forecast package will handle msts objects.
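To make the Fourier-terms idea concrete, here is a hypothetical Python analogue of the forecast package's fourier function (the function name and defaults are my own, not the package's API):

```python
import numpy as np

def fourier_terms(t, period, K):
    """First K sine/cosine pairs for one seasonal period (e.g. period=1440
    for daily seasonality in minute data). The columns can be used as
    regressors in a regression or ARIMA model."""
    t = np.asarray(t, float)
    cols = []
    for k in range(1, K + 1):
        cols.append(np.sin(2 * np.pi * k * t / period))
        cols.append(np.cos(2 * np.pi * k * t / period))
    return np.column_stack(cols)

t = np.arange(3 * 1440)                  # three days of minute-level data
X = fourier_terms(t, period=1440, K=3)   # daily-seasonality regressors
```

For multiple seasonalities you would simply concatenate blocks with different `period` values (e.g. 60, 1440, 10080), which also handles non-integer periods naturally.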
|
7,459
|
Can you overfit by training machine learning algorithms using CV/Bootstrap?
|
There is a definitive answer to this question which is "yes, it is certainly possible to overfit a cross-validation based model selection criterion and end up with a model that generalises poorly!". In my view, this appears not to be widely appreciated, but is a substantial pitfall in the application of machine learning methods, and is the main focus of my current research; I have written two papers on the subject so far
G. C. Cawley and N. L. C. Talbot, Over-fitting in model selection and subsequent selection bias in performance evaluation, Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010. (www)
which demonstrates that over-fitting in model selection is a substantial problem in machine learning (and you can get severely biased performance estimates if you cut corners in model selection during performance evaluation) and
G. C. Cawley and N. L. C. Talbot, Preventing over-fitting in model selection via Bayesian regularisation of the hyper-parameters, Journal of Machine Learning Research, volume 8, pages 841-861, April 2007. (www)
where the cross-validation based model selection criterion is regularised to try and ameliorate over-fitting in model selection (which is a key problem if you use a kernel with many hyper-parameters).
I am writing up a paper on grid-search based model selection at the moment, which shows that it is certainly possible to use a grid that is too fine where you end up with a model that is statistically inferior to a model selected by a much coarser grid (it was a question on StackExchange that inspired me to look into grid-search).
Hope this helps.
P.S. Unbiased performance evaluation and reliable model selection can indeed be computationally expensive, but in my experience it is well worthwhile. Nested cross-validation, where the outer cross-validation is used for performance estimation and the inner cross-validation for model selection, is a good basic approach.
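As a sketch of that basic approach (in Python, with a toy k-NN classifier and made-up data, not the method from the papers above): the inner loop picks the hyper-parameter using only the outer-training data, and the outer loop scores the result on data the tuning never saw:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data (purely illustrative).
X = np.vstack([rng.normal(0.0, 1.0, (60, 2)), rng.normal(1.5, 1.0, (60, 2))])
y = np.array([0] * 60 + [1] * 60)

def knn_accuracy(Xtr, ytr, Xte, yte, k):
    """Accuracy of a brute-force k-nearest-neighbour classifier."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    pred = (ytr[nn].mean(axis=1) > 0.5).astype(int)
    return float((pred == yte).mean())

def folds(n, n_folds):
    return np.array_split(rng.permutation(n), n_folds)

def nested_cv(X, y, ks=(1, 3, 5, 7), n_outer=5, n_inner=4):
    outer_scores = []
    for te in folds(len(y), n_outer):                 # outer CV: performance
        tr = np.setdiff1d(np.arange(len(y)), te)
        best_k, best_acc = ks[0], -1.0
        for k in ks:                                  # inner CV: model selection
            accs = []
            for ite in folds(len(tr), n_inner):       # positions within tr
                itr = np.setdiff1d(np.arange(len(tr)), ite)
                accs.append(knn_accuracy(X[tr[itr]], y[tr[itr]],
                                         X[tr[ite]], y[tr[ite]], k))
            if np.mean(accs) > best_acc:
                best_acc, best_k = np.mean(accs), k
        # Score the tuned model on data never touched by the tuning.
        outer_scores.append(knn_accuracy(X[tr], y[tr], X[te], y[te], best_k))
    return float(np.mean(outer_scores))

score = nested_cv(X, y)
```

The key design point is that `best_k` is chosen anew inside each outer fold, so the outer score includes the variability of the selection procedure itself.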
|
7,460
|
Can you overfit by training machine learning algorithms using CV/Bootstrap?
|
Cross-validation and the bootstrap have been shown to give estimates of the error rate that are nearly unbiased, and in some cases the bootstrap is more accurate than cross-validation. The problem with other methods such as resubstitution is that, by estimating error on the same data set you fit the classifier with, you can grossly underestimate the error rate and may be led to algorithms that include too many parameters and will not predict future values as accurately as an algorithm fit to a small set of parameters.
The key to the use of statistical methods is that the data you have to train the classifier are typical of the data you will see in the future, where the classes are missing and must be predicted by the classifier. If you think that the future data could be very different, then statistical methods can't help, and I don't know what could.
|
7,461
|
Can you overfit by training machine learning algorithms using CV/Bootstrap?
|
I suspect one answer here is that, in the context of optimisation, what you are trying to find is a global minimum of a noisy cost function. So you have all the challenges of a multi-dimensional global optimisation plus a stochastic component added to the cost function.
Many of the approaches that deal with the challenges of local minima and an expensive search space themselves have parameters which may need tuning, such as simulated annealing or Monte Carlo methods.
In an ideal, computationally unbounded universe, I suspect you could attempt to find a global minimum of your parameter space with suitably tight limits on the bias and variance of your estimate of the error function. In this scenario regularisation wouldn't be an issue, as you could re-sample ad infinitum.
In the real world I suspect you may easily find yourself in a local minimum.
As you mention, it is a separate issue, but this still leaves you open to overfitting due to sampling issues associated with the data available to you and its relationship to the real underlying distribution of the sample space.
|
7,462
|
Can you overfit by training machine learning algorithms using CV/Bootstrap?
|
It strongly depends on the algorithm, but you certainly can -- though in most cases it will be just a benign waste of effort.
The core of this problem is that this is not a strict optimization -- you don't have any $f(\mathbf{x})$ defined on some domain which simply has an extremum for at least one value of $\mathbf{x}$, say $\mathbf{x}_{\text{opt}}$, and all you have to do is to find it. Instead, you have $f(\mathbf{x})+\epsilon$, where $\epsilon$ has some crazy distribution, is often stochastic and depends not only on $\mathbf{x}$, but also on your training data and CV/bootstrap details. This way, the only reasonable thing you can search for is some subspace of $f$'s domain, say $X_\text{opt}\ni \textbf{x}_\text{opt}$, on which all the values of $f+\epsilon$ are insignificantly different (statistically speaking, if you wish).
Now, while you can't find $\textbf{x}_\text{opt}$, in practice any value from $X_\text{opt}$ will do -- and usually it is just a search grid point from $X_\text{opt}$ selected at random, to minimize computational load, to maximize some sub-$f$ performance measure, you name it.
Serious overfitting can happen if the $f$ landscape has a sharp extremum -- yet this "shouldn't happen", i.e. it is a characteristic of a very badly selected algorithm/data pair and a bad prognosis for the generalization power.
Thus (based on practices seen in good journals), full, external validation of parameter selection is not something you rigorously have to do (unlike validating feature selection) -- but only if the optimization is cursory and the classifier is rather insensitive to the parameters.
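A tiny numerical illustration of this (a Python sketch; the loss function and noise level are invented): with noise on the selection criterion, many grid points are statistically indistinguishable from the observed minimum, so any of them is a reasonable pick:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical smooth true loss f(x) with a flat-ish optimum, observed
# only through a noisy CV-style estimate f(x) + eps.
grid = np.linspace(-2, 2, 41)
f = grid ** 2 / 10                        # true loss, minimum at x = 0
eps = rng.normal(0, 0.05, size=grid.size)
observed = f + eps

x_hat = grid[np.argmin(observed)]         # what grid search returns
# Grid points whose observed loss is within one noise s.d. of the minimum:
X_opt = grid[observed <= observed.min() + 0.05]
```

Typically `X_opt` contains many points around zero: the search cannot resolve them, and picking any one of them is as good as another.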
|
7,463
|
Can you overfit by training machine learning algorithms using CV/Bootstrap?
|
Yes, the parameters can be "overfitted" to the training and test set during cross-validation or bootstrapping. However, there are some methods to prevent this.
A first simple method is to divide your dataset into three partitions: one for fitting the classifier (~60%), one for tuning the parameters (~20%), and one for final testing (~20%). This is only possible if you have a fairly large dataset. In other cases, double cross-validation is suggested.
Romain François and Florent Langrognet, "Double Cross Validation for Model Based Classification", 2006
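A minimal sketch of the first method (Python, with a hypothetical 60/20/20 split over 100 observations):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
idx = rng.permutation(n)

# Fit the classifier on `train`, choose the parameters by their score on
# `val`, and report performance once, at the very end, on `test`.
train, val, test = idx[:60], idx[60:80], idx[80:]
```

The essential point is that `test` is touched exactly once, after all parameter tuning is finished.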
|
7,464
|
Comparing hierarchical clustering dendrograms obtained by different distances & methods
|
To compare the similarity of two hierarchical (tree-like) structures, measures based on the cophenetic correlation idea are used. But is it correct to compare dendrograms in order to select the "right" method or distance measure in hierarchical clustering?
There are some points - hidden snags - regarding hierarchical cluster analysis that I would hold quite important:
Never compare visually (in order to select the method giving a stronger partition) dendrograms obtained by different agglomeration methods. It won't tell you which method is "better" at that. Each method has its own "prototypical" tree look: the trees will differ consistently even when the data have no cluster structure or have a random cluster structure. (And I don't think there exists a standardization or measure that would remove these intrinsic differences.) You may, however, compare the looks of dendrograms produced by the same method on different data. Maxim: directly comparing the appearance of dendrograms from different methods is unacceptable.
Do not decide on the number of clusters (i.e. where to cut the tree) by looking at the dendrogram of Ward's method. In Ward's method, the tree shows the growth of the summative, not the averaged, coefficient of colligation; the consequence is that, since later clusters are bigger in the number of points, the later clusters look misleadingly "better" on the tree. To standardize Ward's dendrogram appropriately, divide the coefficient growth at each step by the overall number of points in the two clusters being combined (such a standardized Ward dendrogram, though, may be hard to implement graphically).$^1$ Maxim: choosing a cut level by contemplating a dendrogram's appearance, while possible, is not the best method to select the partition, and for some methods may be misleading. It is recommended to rely on some formal internal clustering criterion instead (see also here).
Albeit no-one can forbid you from "experimenting" with distance measures or agglomerative methods, it is better to select the distance and the method consciously rather than by blindly trying. The distance should reflect the aspects of difference you are interested in, and the method - one must be aware - implies a specific archetype of a cluster (e.g. the metaphor of a Ward cluster is, I would say, type; a cluster after complete linkage would be circle [by hobby or plot]; a cluster after single linkage would be spectrum [chain]; a cluster after the centroid method would be proximity of platforms [politics]; an average linkage cluster is conceptually the most undifferentiated and would be a generally united class).
Some methods call for the right distance measures and/or the right type of data. Ward and centroid, for example, logically require (squared) Euclidean distance - because these methods engage in computing centroids in Euclidean space. And computing geometric centroids is incongruous with, for example, binary data; the data should be scale/continuous. Maxim: the correspondence between data, distance and method assumptions is a very important and not-so-easy question.
Preprocessing (such as centering, scaling and other transformations of variables/features) prior to computing a distance matrix and doing the clustering is an extremely important question, too. It can dramatically influence the results. Think over what preprocessing may help you and will make sense from the interpretation point of view. Also, never be shy to carefully inspect your data graphically before attempting to do cluster analysis.
Not all methods of agglomerative clustering can equally be seen as giving you a hierarchical classification... on philosophical grounds. For example, the centroid method does give a hierarchy in a sense, because the cluster centre is an emergent and defining feature of a cluster as a whole, and merging clusters is driven by that feature. Complete linkage, on the other hand, "dismisses" both subclusters when it merges them - by virtue of the distances among the individual objects of the two. Thus, a complete-linkage dendrogram is merely a history of collection and not a parent-child sort of taxonomy. Maxim: hierarchical agglomerative cluster analysis generally expects that you make a partition based on its result, rather than see the result as a hierarchical taxonomy.
Hierarchical clustering is a typical greedy algorithm that makes the best choice among the alternatives appearing at each step, in the hope of getting close to the optimal solution in the end. However, the "best" choice appearing at a high-level step is likely to be poorer than the global optimum theoretically possible at that step. The greater the step, the greater the suboptimality, as a rule. Given that we usually want few clusters, the last steps are important; and, as just said, they are expected to be relatively poor if the number of steps is high (say, the thousandth step). That's why hierarchical clustering is generally not recommended for large samples of objects (numbering thousands of objects), even if the program could handle such a big distance matrix.
If after the above precautions you still want a measure of similarity between hierarchical classifications, you might google 'comparing dendrograms' and 'comparing hierarchical classifications'. One idea that most suggests itself is based on the cophenetic correlation: having two dendrograms for the same dataset of n objects, let $X_{ij}$ be the coefficient of colligation (or maybe its rank, the step number) between every pair of objects ij in one dendrogram, and $Y_{ij}$ likewise in the other dendrogram. Compute the correlation or cosine.
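As a sketch of that idea (Python; the colligation levels below are toy numbers, not derived from real data), correlate the pairwise levels of the two trees, counting each pair ij once:

```python
import numpy as np

# Toy cophenetic ("colligation") levels X_ij and Y_ij for the same four
# objects under two different dendrograms (hypothetical values).
X = np.array([[0, 1, 3, 3],
              [1, 0, 3, 3],
              [3, 3, 0, 2],
              [3, 3, 2, 0]], float)
Y = np.array([[0, 1, 2, 3],
              [1, 0, 2, 3],
              [2, 2, 0, 3],
              [3, 3, 3, 0]], float)

iu = np.triu_indices(4, k=1)            # each pair of objects counted once
r = np.corrcoef(X[iu], Y[iu])[0, 1]     # cophenetic correlation of the trees
```

A value of `r` near 1 means the two trees agree about which objects join early and which join late; ranks (step numbers) could be substituted for the raw levels to make the measure scale-free.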
$^1$ Later update on the problem of the dendrogram in Ward's method. Different clustering programs may output differently transformed agglomeration coefficients for Ward's method. Hence their dendrograms will look somewhat different despite the clustering history and results being the same. For example, SPSS doesn't take the root of the ultrametric coefficients, and it cumulates them in the output. Another tradition (found in some R packages, for example) is to take the root (so-called "Ward-2" implementations) and not to cumulate. To repeat: such differences affect only the general shape/looks of the dendrogram, not the clustering results. But the looks of the dendrogram might influence your decision about the number of clusters. The moral is that it would be safe not to rely on the dendrogram in Ward's method at all, unless you know exactly what these coefficients out of your program are and how to interpret them correctly.
|
7,465
|
Raw or orthogonal polynomial regression?
|
I feel like several of these answers miss the point. Haitao's answer addresses the computational problems with fitting raw polynomials, but it's clear that OP is asking about the statistical differences between the two approaches. That is, if we had a perfect computer that could represent all values exactly, why would we prefer one approach over the other?
user5957401 argues that orthogonal polynomials reduce the collinearity among the polynomial functions, which makes their estimation more stable. I agree with Jake Westfall's critique; the coefficients in orthogonal polynomials represent completely different quantities from the coefficients on raw polynomials. The model-implied dose-response function, $R^2$, MSE, predicted values, and the standard errors of the predicted values will all be identical regardless of whether you use orthogonal or raw polynomials.
The coefficient on $X$ in a raw polynomial regression of order 2 has the interpretation of "the instantaneous change in $Y$ when $X=0$." If you performed a marginal effects procedure on the orthogonal polynomial where $X=0$, you would get exactly the same slope and standard error, even though the coefficient and standard error on the first-order term in the orthogonal polynomial regression are completely different from their values in the raw polynomial regression. That is, when trying to get the same quantities from both regressions (i.e., quantities that can be interpreted the same way), the estimates and standard errors will be identical. Using orthogonal polynomials doesn't mean you magically have more certainty of the slope of $X$ at any given point. The stability of the models is identical.
In the example below, I fit a raw polynomial model and an orthogonal polynomial model on the same data using polynomials of order 3. To get a parameter with the same interpretation as the slope on the first-order (linear) term in the raw model, I used a marginal effects procedure on the orthogonal model, requesting the slope when the predictor is equal to 0.
data("iris")
#Raw:
fit.raw <- lm(Petal.Length ~ Petal.Width + I(Petal.Width^2) +
I(Petal.Width^3), data = iris)
summary(fit.raw)
#> Coefficients:
#> Estimate Std. Error t value Pr(>|t|)
#> (Intercept) 1.1034 0.1304 8.464 2.50e-14 ***
#> Petal.Width 1.1527 0.5836 1.975 0.05013 .
#> I(Petal.Width^2) 1.7100 0.5487 3.116 0.00221 **
#> I(Petal.Width^3) -0.5788 0.1408 -4.110 6.57e-05 ***
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> Residual standard error: 0.3898 on 146 degrees of freedom
#> Multiple R-squared: 0.9522, Adjusted R-squared: 0.9512
#> F-statistic: 969.9 on 3 and 146 DF, p-value: < 2.2e-16
#Orthogonal
fit.orth <- lm(Petal.Length ~ stats::poly(Petal.Width, 3), data = iris)
#Marginal effect of X at X=0 from orthogonal model
library(margins)
summary(margins(fit.orth, variables = "Petal.Width",
at = data.frame(Petal.Width = 0)))
#> Warning in check_values(data, at): A 'at' value for 'Petal.Width' is
#> outside observed data range (0.1,2.5)!
#> factor Petal.Width AME SE z p lower upper
#> Petal.Width 0.0000 1.1527 0.5836 1.9752 0.0482 0.0089 2.2965
Created on 2019-10-25 by the reprex package (v0.3.0)
The marginal effect of Petal.Width at 0 from the orthogonal fit and its standard error are exactly equal to those from the raw polynomial fit (i.e., 1.1527). Using orthogonal polynomials doesn't improve the precision of estimates of the same quantity between the two models.
The key is the following: using orthogonal polynomials allows you to isolate the contribution of each term to explaining variance in the outcome, e.g., as measured by the squared semipartial correlation. If you fit an orthogonal polynomial of order 3, the squared semipartial correlation for each term represents the variance in the outcome explained by that term in the model. So, if you wanted to answer "How much of the variance in $Y$ is explained by the linear component of $X$?" you could fit an orthogonal polynomial regression, and the squared semipartial correlation on the linear term would represent this quantity. This is not so with raw polynomials. If you fit a raw polynomial model of the same order, the squared semipartial correlation on the linear term does not represent the proportion of variance in $Y$ explained by the linear component of $X$. See below.
library(jtools)
data("iris")
fit.raw3 <- lm(Petal.Length ~ Petal.Width + I(Petal.Width^2) +
I(Petal.Width^3), data = iris)
fit.raw1 <- lm(Petal.Length ~ Petal.Width, data = iris)
round(summ(fit.raw3, part.corr = T)$coef, 3)
#> Est. S.E. t val. p partial.r part.r
#> (Intercept) 1.103 0.130 8.464 0.000 NA NA
#> Petal.Width 1.153 0.584 1.975 0.050 0.161 0.036
#> I(Petal.Width^2) 1.710 0.549 3.116 0.002 0.250 0.056
#> I(Petal.Width^3) -0.579 0.141 -4.110 0.000 -0.322 -0.074
round(summ(fit.raw1, part.corr = T)$coef, 3)
#> Est. S.E. t val. p partial.r part.r
#> (Intercept) 1.084 0.073 14.850 0 NA NA
#> Petal.Width 2.230 0.051 43.387 0 0.963 0.963
fit.orth3 <- lm(Petal.Length ~ stats::poly(Petal.Width, 3),
data = iris)
fit.orth1 <- lm(Petal.Length ~ stats::poly(Petal.Width, 3)[,1],
data = iris)
round(summ(fit.orth3, part.corr = T)$coef, 3)
#> Est. S.E. t val. p partial.r part.r
#> (Intercept) 3.758 0.032 118.071 0 NA NA
#> stats::poly(Petal.Width, 3)1 20.748 0.390 53.225 0 0.975 0.963
#> stats::poly(Petal.Width, 3)2 -3.015 0.390 -7.735 0 -0.539 -0.140
#> stats::poly(Petal.Width, 3)3 -1.602 0.390 -4.110 0 -0.322 -0.074
round(summ(fit.orth1, part.corr = T)$coef, 3)
#> Est. S.E. t val. p partial.r part.r
#> (Intercept) 3.758 0.039 96.247 0 NA NA
#> stats::poly(Petal.Width, 3)[, 1] 20.748 0.478 43.387 0 0.963 0.963
Created on 2019-10-25 by the reprex package (v0.3.0)
The squared semipartial correlations for the raw polynomials when the polynomial of order 3 is fit are $0.001$, $0.003$, and $0.005$. When only the linear term is fit, the squared semipartial correlation is $0.927$. The squared semipartial correlations for the orthogonal polynomials when the polynomial of order 3 is fit are $0.927$, $0.020$, and $0.005$. When only the linear term is fit, the squared semipartial correlation is still $0.927$. From the orthogonal polynomial model but not the raw polynomial model, we know that most of the variance explained in the outcome is due to the linear term, with very little coming from the square term and even less from the cubic term. The raw polynomial values don't tell that story.
Now, if you want this interpretational benefit over the interpretational benefit of actually being able to understand the coefficients of the model, then you should use orthogonal polynomials. If you would prefer to look at the coefficients and know exactly what they mean (though I doubt one typically does), then you should use the raw polynomials. If you don't care (i.e., you only want to control for confounding or generate predicted values), then it truly doesn't matter; both forms carry the same information with respect to those goals. I would also argue that orthogonal polynomials should be preferred in regularization (e.g., lasso), because removing higher-order terms doesn't affect the coefficients of the lower order terms, which is not true with raw polynomials, and regularization techniques often care about the size of each coefficient.
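To illustrate the regularization point with a quick sketch of my own (not part of the original answer): because `stats::poly()` builds each degree's basis from the same orthogonal columns, dropping the highest-order term leaves the lower-order coefficients unchanged, whereas with raw polynomials every remaining coefficient shifts.

```r
data("iris")

# Orthogonal fits of degree 3 and degree 2: the intercept and the first
# two slopes agree, because poly() reuses the same orthogonal columns
orth3 <- lm(Petal.Length ~ stats::poly(Petal.Width, 3), data = iris)
orth2 <- lm(Petal.Length ~ stats::poly(Petal.Width, 2), data = iris)
all.equal(unname(coef(orth2)), unname(coef(orth3))[1:3])  # TRUE

# Raw fits: dropping the cubic term changes every remaining coefficient
raw3 <- lm(Petal.Length ~ poly(Petal.Width, 3, raw = TRUE), data = iris)
raw2 <- lm(Petal.Length ~ poly(Petal.Width, 2, raw = TRUE), data = iris)
isTRUE(all.equal(unname(coef(raw2)), unname(coef(raw3))[1:3]))  # FALSE
```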
|
7,466
|
Raw or orthogonal polynomial regression?
|
I believe the answer is less about numeric stability (though that plays a role) and more about reducing correlation.
In essence -- the issue boils down to the fact that when we regress against a bunch of high order polynomials, the covariates we are regressing against become highly correlated. Example code below:
x = rnorm(1000)
raw.poly = poly(x,6,raw=T)
orthogonal.poly = poly(x,6)
cor(raw.poly)
cor(orthogonal.poly)
This is tremendously important. As the covariates become more correlated, our ability to determine which are important (and what the size of their effects is) erodes rapidly. This is typically referred to as the problem of multicollinearity. At the limit, if we had two variables that were fully correlated, when we regress them against something, it's impossible to distinguish between the two -- you can think of this as an extreme version of the problem, but this problem affects our estimates at lesser degrees of correlation as well. Thus, in a real sense -- even if numerical instability weren't a problem -- the correlation among higher-order polynomials does tremendous damage to our inference routines. This manifests as larger standard errors (and thus smaller t-stats) than you would otherwise see (see the example regression below). For this reason, we might choose to orthogonalize our polynomials before regressing on them.
y = x*2 + 5*x**3 - 3*x**2 + rnorm(1000)
raw.mod = lm(y~poly(x,6,raw=T))
orthogonal.mod = lm(y~poly(x,6))
summary(raw.mod)
summary(orthogonal.mod)
If you run this code, interpretation is a touch hard because the coefficients all change, so things are hard to compare. Looking at the t-stats, though, we can see that the ability to determine the coefficients was MUCH greater with orthogonal polynomials. For the 3 relevant coefficients, I got t-stats of (560, 21, 449) for the orthogonal model, and only (28, -38, 121) for the raw polynomial model. This is a huge difference for a simple model with only a few relatively low-order polynomial terms that mattered.
That is not to say that this comes without costs. There are two primary costs to bear in mind. First, we lose some interpretability with orthogonal polynomials. We might understand what the coefficient on x**3 means, but interpreting the coefficient on x**3 - 3x (the third Hermite polynomial -- not necessarily what you will use) can be much harder.
Second, when we say that these polynomials are orthogonal, we mean that they are orthogonal with respect to some measure of distance. Picking a measure of distance that is relevant to your situation can be difficult. Having said that, I believe the poly function is designed so that its polynomials are orthogonal with respect to covariance -- which is useful for linear regressions.
|
7,467
|
Raw or orthogonal polynomial regression?
|
Why can't I just do a "normal" regression to get the coefficients?
Because it is not numerically stable. Remember that a computer uses a fixed number of bits to represent a floating-point number. Check IEEE 754 for details; you may be surprised that even a simple number like $0.4$ has to be stored (in single precision) as $0.4000000059604644775390625$. You can try other numbers here.
Using raw polynomials will cause problems, because we end up with huge numbers. Here is a small demonstration: we compare the condition numbers of the model matrices built with raw and with orthogonal polynomials.
> kappa(model.matrix(mpg~poly(wt,10),mtcars))
[1] 5.575962
> kappa(model.matrix(mpg~poly(wt,10, raw = T),mtcars))
[1] 2.119183e+13
You can also check my answer here for an example.
Why are there large coefficients for higher-order polynomial
|
7,468
|
Raw or orthogonal polynomial regression?
|
I would have just commented to mention this, but I do not have enough rep, so I'll try to expand this into an answer. You might be interested to see that in Lab Section 7.8.1 of "Introduction to Statistical Learning" (James et al., 2017, corrected 8th printing), the authors discuss some differences between using orthogonal polynomials or not, i.e., using raw=TRUE or raw=FALSE in the poly() function. For example, the coefficient estimates will change, but the fitted values do not:
# using the Wage dataset in the ISLR library
library(ISLR)
fit1 <- lm(wage ~ poly(age, 4, raw=FALSE), data=Wage)
fit2 <- lm(wage ~ poly(age, 4, raw=TRUE), data=Wage)
print(coef(fit1)) # coefficient estimates differ
print(coef(fit2))
all.equal(predict(fit1), predict(fit2)) #returns TRUE
The book also discusses how, when orthogonal polynomials are used, the p-values obtained from the anova() nested F-test (used to explore what degree of polynomial might be warranted) are the same as those obtained from the standard t-test output by summary(fit). This illustrates that the F-statistic is equal to the square of the t-statistic in certain situations.
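This equality can be checked directly. The sketch below is mine, not from the book's lab, and uses the built-in iris data instead of Wage so that it runs without the ISLR package:

```r
# Nested fits with orthogonal polynomials of increasing degree
fit1 <- lm(Petal.Length ~ poly(Petal.Width, 1), data = iris)
fit2 <- lm(Petal.Length ~ poly(Petal.Width, 2), data = iris)
fit3 <- lm(Petal.Length ~ poly(Petal.Width, 3), data = iris)

a  <- anova(fit1, fit2, fit3)                  # sequential F-tests
t3 <- summary(fit3)$coefficients[, "t value"]  # t-tests from largest model

# The F for adding each term equals the squared t-stat of that term's
# coefficient in the largest model (rows 3 and 4 of the coefficient
# table are the quadratic and cubic terms)
all.equal(a$F[2], unname(t3[3])^2)  # TRUE
all.equal(a$F[3], unname(t3[4])^2)  # TRUE
```

This works because R's multi-model anova() scales each comparison by the residual mean square of the largest model, and the orthogonal columns make each coefficient's estimate identical across the nested fits.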
|
7,469
|
When to choose SARSA vs. Q Learning
|
They mostly look the same except that in SARSA we take actual action and in Q Learning we take the action with highest reward.
Actually, in both you "take" the actual single generated action $a_{t+1}$ next. The difference is in the update: in Q-learning, you update the estimate from the maximum over the estimates of the possible next actions, regardless of which action you took, whilst in SARSA you update the estimate based on the same action that you then take.
This is probably what you meant by "take" in the question, but in the literature, taking an action means that it becomes the value of e.g. $a_{t}$, and influences $r_{t+1}$, $s_{t+1}$.
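For reference, the two standard update rules (as given in Sutton & Barto) differ only in the bootstrap target:

SARSA: $Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \, Q(s_{t+1}, a_{t+1}) - Q(s_t, a_t) \right]$

Q-learning: $Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \, \max_{a} Q(s_{t+1}, a) - Q(s_t, a_t) \right]$

SARSA bootstraps from the action $a_{t+1}$ actually sampled from the behaviour policy, while Q-learning bootstraps from the greedy action regardless of what the behaviour policy does.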
Are there any theoretical or practical settings in which one should prefer one over the other?
Q-learning has the following advantages and disadvantages compared to SARSA:
Q-learning directly learns the optimal policy, whilst SARSA learns a near-optimal policy whilst exploring. If you want to learn an optimal policy using SARSA, then you will need to decide on a strategy to decay $\epsilon$ in $\epsilon$-greedy action choice, which may become a fiddly hyperparameter to tune.
Q-learning (and off-policy learning in general) has higher per-sample variance than SARSA, and may suffer from problems converging as a result. This turns up as a problem when training neural networks via Q-learning.
SARSA will approach convergence allowing for possible penalties from exploratory moves, whilst Q-learning will ignore them. That makes SARSA more conservative - if there is risk of a large negative reward close to the optimal path, Q-learning will tend to trigger that reward whilst exploring, whilst SARSA will tend to avoid a dangerous optimal path and only slowly learn to use it when the exploration parameters are reduced. The classic toy problem that demonstrates this effect is called cliff walking.
In practice the last point can make a big difference if mistakes are costly - e.g. you are training a robot not in simulation, but in the real world. You may prefer a more conservative learning algorithm that avoids high risk, if there was real time and money at stake if the robot was damaged.
If your goal is to train an optimal agent in simulation, or in a low-cost and fast-iterating environment, then Q-learning is a good choice, due to the first point (learning optimal policy directly). If your agent learns online, and you care about rewards gained whilst learning, then SARSA may be a better choice.
|
7,470
|
Interpretation of biplots in principal components analysis
|
PCA is one of the many ways to analyse the structure of a given correlation matrix. By construction, the first principal axis is the one which maximizes the variance (reflected by its eigenvalue) when data are projected onto a line (which stands for a direction in the $p$-dimensional space, assuming you have $p$ variables) and the second one is orthogonal to it, and still maximizes the remaining variance. This is the reason why using the first two axes should yield the better approximation of the original variables space (say, a matrix $X$ of dim $n \times p$) when it is projected onto a plane.
Principal components are just linear combinations of the original variables. Therefore, plotting individual factor scores (defined as $Xu$, where $u$ is the vector of loadings of any principal component) may help to highlight groups of homogeneous individuals, for example, or to interpret one's overall scoring when considering all variables at the same time. In other words, this is a way to summarize one's location with respect to one's values on the $p$ variables, or a combination thereof. In your case, Fig. 13.3 in HSAUR shows that Joyner-Kersee (Jy-K) has a high (negative) score on the 1st axis, suggesting she performed overall quite well on all events. The same line of reasoning applies for interpreting the second axis. I took only a very short look at the figure, so I will not go into details and my interpretation is certainly superficial. I assume that you will find further information in the HSAUR textbook. Here it is worth noting that both variables and individuals are shown on the same diagram (this is called a biplot), which helps to interpret the factorial axes while looking at individuals' locations. Usually, we plot the variables into a so-called correlation circle, where the angle formed by any two variables, represented here as vectors, reflects their actual pairwise correlation, since the cosine of the angle between pairs of vectors amounts to the correlation between the variables.
I think, however, you'd better start reading some introductory book on multivariate analysis to get deep insight into PCA-based methods. For example, B.S. Everitt wrote an excellent textbook on this topic, An R and S-Plus® Companion to Multivariate Analysis, and you can check the companion website for illustration. There are other great R packages for applied multivariate data analysis, like ade4 and FactoMineR.
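The correlation-circle property mentioned above can be checked numerically: if each variable's coordinates are taken as an eigenvector of the correlation matrix scaled by the square root of its eigenvalue, the inner product of two variables' coordinate vectors recovers their correlation exactly when all components are kept; the first two components give the approximation drawn in the circle. A minimal numpy sketch on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three variables, two of them strongly correlated, then standardized
X = rng.standard_normal((500, 3))
X[:, 1] = 0.8 * X[:, 0] + 0.2 * X[:, 1]
X = (X - X.mean(axis=0)) / X.std(axis=0)

R = np.corrcoef(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(R)
# Variable coordinates: eigenvectors scaled by sqrt(eigenvalues)
coords = eigvec * np.sqrt(eigval)

# With all components, coords @ coords.T reconstructs R exactly
print(np.allclose(coords @ coords.T, R))  # True
```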
|
7,471
|
Interpretation of biplots in principal components analysis
|
The plot is showing:
the score of each case (i.e., athlete) on the first two principal components
the loading of each variable (i.e., each sporting event) on the first two principal components.
The left and bottom axes are showing [normalized] principal component scores; the top and right axes are showing the loadings.
In general it assumes that two components explain a sufficient amount of the variance to provide a meaningful visual representation of the structure of cases and variables.
You can look to see which events are close together in the space. Where this applies, it may suggest that athletes who are good at one event are likely also to be good at the other proximal events. Alternatively, you can use the plot to see which events are distant. For example, javelin appears to be a bit of an outlier and a major event defining the second principal component. Perhaps a different kind of athlete is good at javelin than is good at most of the other events.
Of course, more could be said about substantive interpretation.
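Both halves of such a biplot (case scores and variable loadings) fall out of a single SVD of the centered data matrix. The following is a minimal numpy sketch on synthetic stand-in data, not the actual decathlon results:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 5))          # stand-in: athletes x events
Xc = X - X.mean(axis=0)                   # center before PCA

# SVD gives both halves of the biplot at once
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc_scores = U[:, :2] * s[:2]              # case scores (bottom/left axes)
loadings = Vt[:2].T                       # variable loadings (top/right axes)

print(pc_scores.shape, loadings.shape)    # (30, 2) (5, 2)
```

Plotting `pc_scores` as points and `loadings` as arrows on a second pair of axes reproduces the layout described above.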
|
7,472
|
Is this the state of art regression methodology?
|
It is well known, at least since the late 1960s, that if you take several forecasts† and average them, then the resulting aggregate forecast will in many cases outperform the individual forecasts. Bagging, boosting and stacking are all based exactly on this idea. So yes, if your aim is purely prediction, then in most cases this is the best you can do. What is problematic about this method is that it is a black-box approach that returns the result but does not help you to understand and interpret it. Obviously, it is also more computationally intensive than any other method, since you have to compute several forecasts instead of a single one.
† This applies to predictions in general, but it is most often described in the forecasting literature.
Winkler, R.L. and Makridakis, S. (1983). The Combination of Forecasts. J. R. Statist. Soc. A, 146(2), 150-157.
Makridakis, S. and Winkler, R.L. (1983). Averages of Forecasts: Some Empirical Results. Management Science, 29(9) 987-996.
Clemen, R.T. (1989). Combining forecasts: A review and annotated bibliography. International Journal of Forecasting, 5, 559-583.
Bates, J.M. and Granger, C.W. (1969). The combination of forecasts. Operational Research Quarterly, 451-468.
Makridakis, S. and Hibon, M. (2000). The M3-Competition: results, conclusions and implications. International journal of forecasting, 16(4), 451-476.
Reid, D.J. (1968). Combining three estimates of gross domestic product. Economica, 431-444.
Makridakis, S., Spiliotis, E., and Assimakopoulos, V. (2018). The M4 Competition: Results, findings, conclusion and way forward. International Journal of Forecasting.
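The averaging effect these references document can be illustrated with a small simulation. By convexity of squared error, the combined forecast can never have a higher MSE than the average of the individual MSEs, and with independent errors it is usually much better (a numpy sketch with made-up forecasters):

```python
import numpy as np

rng = np.random.default_rng(42)
truth = np.sin(np.linspace(0, 6, 200))

# Five imperfect "forecasters": truth plus independent bias and noise
forecasts = np.array([
    truth + rng.normal(b, 0.3, truth.size)
    for b in (-0.2, -0.1, 0.0, 0.1, 0.2)
])

def mse(f):
    return np.mean((f - truth) ** 2)

individual = [mse(f) for f in forecasts]
combined = mse(forecasts.mean(axis=0))

# The equal-weights average is guaranteed no worse than the
# average individual error, and here it is substantially better.
print(combined <= np.mean(individual))  # True
```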
|
7,473
|
Is this the state of art regression methodology?
|
Arthur (1994) has a nice short paper/thought experiment that is well-known in the complexity literature.
One of the conclusions there is that agents cannot select better predictive models (even if they have a "forest" of these) under non-equilibrium conditions. For example, if the question is applied to stock market performance, the setting of Arthur (1994) might be applicable.
|
7,474
|
Diagnostics for generalized linear (mixed) models (specifically residuals)
|
This answer is not based on my knowledge but rather quotes what Bolker et al. (2009) wrote in an influential paper in the journal Trends in Ecology and Evolution. Since the article is not open access (although searching for it on Google Scholar may prove successful), I thought I would cite important passages that may be helpful for addressing parts of the questions. So again, it's not what I came up with myself, but I think it represents the best condensed information on GLMMs (including diagnostics) out there, in a very straightforward and easy-to-understand style of writing. If by any means this answer is not suitable for whatever reason, I will simply delete it. Things that I find useful with respect to questions regarding diagnostics are highlighted in bold.
Page 127:
Researchers faced with nonnormal data often try shortcuts
such as transforming data to achieve normality and
homogeneity of variance, using nonparametric tests or relying
on the robustness of classical ANOVA to nonnormality
for balanced designs [15]. They might ignore random effects
altogether (thus committing pseudoreplication) or treat
them as fixed factors [16]. However, such shortcuts can fail
(e.g. count data with many zero values cannot be made
normal by transformation). Even when they succeed, they
might violate statistical assumptions (even nonparametric
tests make assumptions, e.g. of homogeneity of variance
across groups) or limit the scope of inference (one cannot
extrapolate estimates of fixed effects to new groups).
Instead of shoehorning their data into classical statistical
frameworks, researchers should use statistical
approaches that match their data. Generalized linear
mixed models (GLMMs) combine the properties of two
statistical frameworks that are widely used in ecology and evolution, linear
mixed models (which incorporate random effects) and
generalized linear models (which handle nonnormal data
by using link functions and exponential family [e.g. normal,
Poisson or binomial] distributions). GLMMs are the
best tool for analyzing nonnormal data that involve random
effects: all one has to do, in principle, is specify a
distribution, link function and structure of the random
effects.
Page 129, Box 1:
The residuals indicated overdispersion, so we refitted the data with
a quasi-Poisson model. Despite the large estimated scale parameter
(10.8), exploratory graphs found no evidence of outliers at the level of
individuals, genotypes or populations. We used quasi-AIC (QAIC),
using one degree of freedom for random effects [49], for random-effect
and then for fixed-effect model selection.
Page 133, Box 4:
Here we outline a general framework for constructing a full (most
complex) model, the first step in GLMM analysis. Following this
process, one can then evaluate parameters and compare submodels
as described in the main text and in Figure 1.
Specify fixed (treatments or covariates) and random effects
(experimental, spatial or temporal blocks, individuals, etc.). Include
only important interactions. Restrict the model a priori to a feasible
level of complexity, based on rules of thumb (>5–6 random-effect
levels per random effect and >10–20 samples per treatment level
or experimental unit) and knowledge of adequate sample sizes
gained from previous studies [64,65].
Choose an error distribution and link function (e.g. Poisson
distribution and log link for count data, binomial distribution and
logit link for proportion data).
Graphical checking: are variances of data (transformed by the link
function) homogeneous across categories? Are responses of
transformed data linear with respect to continuous predictors?
Are there outlier individuals or groups? Do distributions within
groups match the assumed distribution?
Fit fixed-effect GLMs both to the full (pooled) data set and within
each level of the random factors [28,50]. Estimated parameters
should be approximately normally distributed across groups
(group-level parameters can have large uncertainties, especially
for groups with small sample sizes). Adjust model as necessary
(e.g. change link function or add covariates).
Fit the full GLMM.
Insufficient computer memory or too slow: reduce
model complexity. If estimation succeeds on a subset of the data,
try a more efficient estimation algorithm (e.g. PQL if appropriate).
Failure to converge (warnings or errors): reduce model complexity
or change optimization settings (make sure the resulting answers
make sense). Try other estimation algorithms.
Zero variance components or singularity (warnings or errors):
check that the model is properly defined and identifiable (i.e. all
components can theoretically be estimated). Reduce model complexity.
Adding information to the model (additional covariates, or new
groupings for random effects) can alleviate problems, as will
centering continuous covariates by subtracting their mean [50]. If
necessary, eliminate random effects from the full model, dropping
(i) terms of less intrinsic biological interest, (ii) terms with very
small estimated variances and/or large uncertainty, or (iii) interaction
terms. (Convergence errors or zero variances could indicate
insufficient data.)
Recheck assumptions for the final model (as in step 3) and check
that parameter estimates and confidence intervals are reasonable
(gigantic confidence intervals could indicate fitting problems). The
magnitude of the standardized residuals should be independent of
the fitted values. Assess overdispersion (the sum of the squared
Pearson residuals should be $\chi^2$ distributed [66,67]). If necessary,
change distributions or estimate a scale parameter. Check that a
full model that includes dropped random effects with small
standard deviations gives similar results to the final model. If
different models lead to substantially different parameter estimates,
consider model averaging.
Residual plots should be used to assess overdispersion, and transformed variances should be homogeneous across categories. Nowhere in the article is it mentioned that residuals are supposed to be normally distributed.
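The overdispersion check quoted in Box 4 (the sum of squared Pearson residuals should be roughly $\chi^2$ distributed) can be sketched for the simplest case, an intercept-only Poisson fit to deliberately overdispersed counts. This is only a toy illustration, not the Bolker et al. workflow:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Deliberately overdispersed counts: negative binomial, variance > mean
y = rng.negative_binomial(n=2, p=0.25, size=200)

# Intercept-only Poisson fit: the MLE of the mean is the sample mean
mu = y.mean()
pearson_chi2 = np.sum((y - mu) ** 2 / mu)
df = y.size - 1

dispersion = pearson_chi2 / df  # ~1 for a well-specified Poisson model
p_value = stats.chi2.sf(pearson_chi2, df)
print(dispersion, p_value)
```

A dispersion ratio well above 1 with a tiny p-value is the signal that would prompt switching to a quasi-Poisson or negative binomial model, as in Box 1.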
I think the reason why there are contrasting statements reflects that GLMMs (page 127-128)...
...are surprisingly challenging to use even for statisticians. Although several software packages can handle GLMMs (Table 1), few ecologists and evolutionary biologists are aware of the range of options or of the possible pitfalls. In reviewing papers in ecology and evolution since 2005 found by Google Scholar, 311 out of 537 GLMM analyses (58%) used these tools inappropriately in some way (see online supplementary material).
And here are a few full worked examples using GLMMs including diagnostics.
I realize that this answer is more like a comment and should be treated as such. But the comment section doesn't allow me to add such a long comment. Also since I believe this paper is of value for this discussion (but unfortunately behind a pay-wall), I thought it would be useful to quote important passages here.
Cited papers:
[15] - G.P. Quinn, M.J. Keough (2002): Experimental Design and Data Analysis for Biologists, Cambridge University Press.
[16] - M.J. Crawley (2002): Statistical Computing: An Introduction to Data Analysis Using S-PLUS, John Wiley & Sons.
[28] - J.C. Pinheiro, D.M. Bates (2000): Mixed-Effects Models in S and S-PLUS, Springer.
[49] - F. Vaida, S. Blanchard (2005): Conditional Akaike information for mixed-effects models. Biometrika, 92, pp. 351–370.
[50] - A. Gelman, J. Hill (2006): Data Analysis Using Regression and Multilevel/Hierarchical Models, Cambridge University Press.
[64] - N.J. Gotelli, A.M. Ellison (2004): A Primer of Ecological Statistics, Sinauer Associates.
[65] - F.J. Harrell (2001): Regression Modeling Strategies, Springer.
[66] - J.K. Lindsey (1997): Applying Generalized Linear Models, Springer.
[67] - W. Venables, B.D. Ripley (2002): Modern Applied Statistics with S, Springer.
|
7,475
|
Diagnostics for generalized linear (mixed) models (specifically residuals)
|
This is an old question, but I thought it would be useful to add that option 4 suggested by the OP is now available in the DHARMa R package (available from CRAN, see here).
The package makes the visual residual checks suggested by the accepted answer a lot more reliable / easy.
From the package description:
The DHARMa package uses a simulation-based approach to create readily
interpretable scaled residuals from fitted generalized linear mixed
models. Currently supported are all 'merMod' classes from 'lme4'
('lmerMod', 'glmerMod'), 'glm' (including 'negbin' from 'MASS', but
excluding quasi-distributions) and 'lm' model classes. Alternatively,
externally created simulations, e.g. posterior predictive simulations
from Bayesian software such as 'JAGS', 'STAN', or 'BUGS' can be
processed as well. The resulting residuals are standardized to values
between 0 and 1 and can be interpreted as intuitively as residuals
from a linear regression. The package also provides a number of plot
and test functions for typical model misspecification problems, such as
over/underdispersion, zero-inflation, and spatial / temporal
autocorrelation.
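The core idea behind these scaled residuals can be sketched without the package itself: simulate from the fitted model, then record where each observation falls within its own simulated distribution, randomizing ties for discrete data. This is only a schematic numpy illustration, not DHARMa's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_sim = 100, 1000

# Observed data and a correctly specified "fitted model" (Poisson, mean 4)
observed = rng.poisson(4.0, n_obs)
simulated = rng.poisson(4.0, (n_sim, n_obs))  # simulations per observation

# Scaled residual = empirical CDF position of each observation among
# its simulations, with ties broken at random for discrete responses
below = (simulated < observed).mean(axis=0)
equal = (simulated == observed).mean(axis=0)
scaled = below + rng.uniform(0, 1, n_obs) * equal

# For a well-specified model these are approximately Uniform(0, 1);
# misspecification shows up as departures from uniformity.
print(scaled.min(), scaled.max())
```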
|
7,476
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
For online tutorials, there are
A tutorial in MCMC, by Sahut (2000)
Tutorial on Markov Chain Monte Carlo, by Hanson (2000)
Markov Chain Monte Carlo for Computer Vision, by Zhu et al. (2005)
Introduction to Markov Chain Monte Carlo simulations and their statistical analysis, by Berg (2004).
A Tutorial on Markov Chain Monte-Carlo and Bayesian Modeling by Martin B. Haugh (2021).
Practical Markov Chain Monte Carlo, by Geyer (Stat. Science, 1992), is also a good starting point, and you can look at the MCMCpack or mcmc R packages for illustrations.
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
For online tutorials, there are
A tutorial in MCMC, by Sahut (2000)
Tutorial on Markov Chain Monte Carlo, by Hanson (2000)
Markov Chain Monte Carlo for Computer Vision, by Zhu et al. (2005)
Introduct
|
Good sources for learning Markov chain Monte Carlo (MCMC)
For online tutorials, there are
A tutorial in MCMC, by Sahut (2000)
Tutorial on Markov Chain Monte Carlo, by Hanson (2000)
Markov Chain Monte Carlo for Computer Vision, by Zhu et al. (2005)
Introduction to Markov Chain Monte Carlo simulations and their statistical analysis, by Berg (2004).
A Tutorial on Markov Chain Monte-Carlo and Bayesian Modeling by Martin B. Haugh (2021).
Practical Markov Chain Monte Carlo, by Geyer (Stat. Science, 1992), is also a good starting point, and you can look at the MCMCpack or mcmc R packages for illustrations.
|
Good sources for learning Markov chain Monte Carlo (MCMC)
For online tutorials, there are
A tutorial in MCMC, by Sahut (2000)
Tutorial on Markov Chain Monte Carlo, by Hanson (2000)
Markov Chain Monte Carlo for Computer Vision, by Zhu et al. (2005)
Introduct
|
7,477
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
I haven't read it (yet), but if you're into R, there is Christian P. Robert's and George Casella's book:
Introducing Monte Carlo Methods with R (Use R)
I know of it from following his (very good) blog
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
I haven't read it (yet), but if you're into R, there is Christian P. Robert's and George Casella's book:
Introducing Monte Carlo Methods with R (Use R)
I know of it from following his (very good) blog
|
Good sources for learning Markov chain Monte Carlo (MCMC)
I haven't read it (yet), but if you're into R, there is Christian P. Robert's and George Casella's book:
Introducing Monte Carlo Methods with R (Use R)
I know of it from following his (very good) blog
|
Good sources for learning Markov chain Monte Carlo (MCMC)
I haven't read it (yet), but if you're into R, there is Christian P. Robert's and George Casella's book:
Introducing Monte Carlo Methods with R (Use R)
I know of it from following his (very good) blog
|
7,478
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
Gilks W.R., Richardson S., Spiegelhalter D.J. Markov Chain Monte Carlo in Practice. Chapman & Hall/CRC, 1996.
A relative oldie now, but still a goodie.
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
Gilks W.R., Richardson S., Spiegelhalter D.J. Markov Chain Monte Carlo in Practice. Chapman & Hall/CRC, 1996.
A relative oldie now, but still a goodie.
|
Good sources for learning Markov chain Monte Carlo (MCMC)
Gilks W.R., Richardson S., Spiegelhalter D.J. Markov Chain Monte Carlo in Practice. Chapman & Hall/CRC, 1996.
A relative oldie now, but still a goodie.
|
Good sources for learning Markov chain Monte Carlo (MCMC)
Gilks W.R., Richardson S., Spiegelhalter D.J. Markov Chain Monte Carlo in Practice. Chapman & Hall/CRC, 1996.
A relative oldie now, but still a goodie.
|
7,479
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
Handbook of Markov Chain Monte Carlo, Steve Brooks, Andrew Gelman, Galin Jones and Xiao-Li Meng, eds. 2011 CRC Press.
Chapter 4, 'Inference from simulations and monitoring convergence' by Gelman and Shirley, is available online.
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
Handbook of Markov Chain Monte Carlo, Steve Brooks, Andrew Gelman, Galin Jones and Xiao-Li Meng, eds. 2011 CRC Press.
Chapter 4, 'Inference from simulations and monitoring convergence' by Gelman and
|
Good sources for learning Markov chain Monte Carlo (MCMC)
Handbook of Markov Chain Monte Carlo, Steve Brooks, Andrew Gelman, Galin Jones and Xiao-Li Meng, eds. 2011 CRC Press.
Chapter 4, 'Inference from simulations and monitoring convergence' by Gelman and Shirley, is available online.
|
Good sources for learning Markov chain Monte Carlo (MCMC)
Handbook of Markov Chain Monte Carlo, Steve Brooks, Andrew Gelman, Galin Jones and Xiao-Li Meng, eds. 2011 CRC Press.
Chapter 4, 'Inference from simulations and monitoring convergence' by Gelman and
|
7,480
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
Dani Gamerman & Hedibert F. Lopes. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference (2nd ed.). Boca Raton, FL: Champan & Hall/CRC, 2006. 344 pp. ISBN 0-412-81820-5.
-- a more recently updated book than Gilks, Richardson & Spiegelhalter. I haven't read it myself, but it was well reviewed in Technometrics in 2008, and the first edition also got a good review in The Statistician back in 1998.
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
Dani Gamerman & Hedibert F. Lopes. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference (2nd ed.). Boca Raton, FL: Chapman & Hall/CRC, 2006. 344 pp. ISBN 0-412-81820-5.
-- a more rec
|
Good sources for learning Markov chain Monte Carlo (MCMC)
Dani Gamerman & Hedibert F. Lopes. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference (2nd ed.). Boca Raton, FL: Chapman & Hall/CRC, 2006. 344 pp. ISBN 0-412-81820-5.
-- a more recently updated book than Gilks, Richardson & Spiegelhalter. I haven't read it myself, but it was well reviewed in Technometrics in 2008, and the first edition also got a good review in The Statistician back in 1998.
|
Good sources for learning Markov chain Monte Carlo (MCMC)
Dani Gamerman & Hedibert F. Lopes. Markov Chain Monte Carlo: Stochastic Simulation for Bayesian Inference (2nd ed.). Boca Raton, FL: Chapman & Hall/CRC, 2006. 344 pp. ISBN 0-412-81820-5.
-- a more rec
|
7,481
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
Another classic (a companion to the already mentioned Introducing Monte Carlo Methods with R):
Monte Carlo Statistical Methods by Robert and Casella (2004)
in the Use R! series there is also:
Introduction to Probability Simulation and Gibbs Sampling with R by Suess and Trumbo (2010)
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
Another classic (a companion to the already mentioned Introducing Monte Carlo Methods with R):
Monte Carlo Statistical Methods by Robert and Casella (2004)
in the Use R! series there is also:
|
Good sources for learning Markov chain Monte Carlo (MCMC)
Another classic (a companion to the already mentioned Introducing Monte Carlo Methods with R):
Monte Carlo Statistical Methods by Robert and Casella (2004)
in the Use R! series there is also:
Introduction to Probability Simulation and Gibbs Sampling with R by Suess and Trumbo (2010)
|
Good sources for learning Markov chain Monte Carlo (MCMC)
Another classic (a companion to the already mentioned Introducing Monte Carlo Methods with R):
Monte Carlo Statistical Methods by Robert and Casella (2004)
in the Use R! series there is also:
|
7,482
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
The text I have found most accessible is Bayesian Cognitive Modeling: A Practical Course. Very clear exposition. The book has great examples in BUGS, and they have been ported to Stan on its github examples page.
|
Good sources for learning Markov chain Monte Carlo (MCMC)
|
The text I have found most accessible is Bayesian Cognitive Modeling: A Practical Course. Very clear exposition. The book has great examples in BUGS, and they have been ported to Stan on its github
|
Good sources for learning Markov chain Monte Carlo (MCMC)
The text I have found most accessible is Bayesian Cognitive Modeling: A Practical Course. Very clear exposition. The book has great examples in BUGS, and they have been ported to Stan on its github examples page.
|
Good sources for learning Markov chain Monte Carlo (MCMC)
The text I have found most accessible is Bayesian Cognitive Modeling: A Practical Course. Very clear exposition. The book has great examples in BUGS, and they have been ported to Stan on its github
|
7,483
|
Outlier Detection on skewed Distributions
|
Under a classical definition of an outlier as a data point outside the 1.5*IQR from the upper or lower quartile,
This is the rule for identifying points outside the ends of the whiskers in a boxplot. Tukey himself would no doubt object to calling them outliers on this basis (he didn't necessarily regard points outside those limits as outliers). These would rather be points which - if your data were expected to come from a distribution somewhat similar to a normal distribution - one might subject to further investigation (such as checking you didn't transpose two digits, for example) -- at most these could be potential outliers. As Nick Cox points out in comments under this answer, a tail of many such points would be taken more as an indicator that a re-expression might be suitable than as an indication of the need to regard the points as outliers.
there is an assumption of a non-skewed distribution.
I assumed by 'non-skewed' you mean symmetric. Then the assumption is more than just that. A heavy-tailed but symmetric distribution might have many points outside the bounds on that rule.
For skewed distributions (Exponential, Poisson, Geometric, etc) is the best way to detect an outlier by analyzing a transform of the original function?
That depends on what constitutes an outlier for your purposes. There's no single definition that's suitable for every purpose - indeed, generally you're probably better off doing other things than (say) picking outliers and omitting them.
For the exponential or geometric, you might do a similar calculation to that for a boxplot, but which would identify a similar fraction in the right tail only (you won't have low-end points identified in an exponential or geometric)$^{\dagger}$ ... or you might do something else.
$\dagger$ In large samples from a normal distribution, the boxplot marks about 0.35% of points at each end, or about 0.7% in total. For an exponential you might mark some multiple of the median, for example. If you wanted to tag roughly 0.7% of points in total for an actual exponential, that would suggest marking points beyond about 7.1 times the median.
Marking points above 7.1 times the median for n=1000 will typically hit between 0.4% to 1.1% of values:
ae <- rexp(1000)
table( ae > 7.1*median(ae) )
FALSE TRUE
993 7
For example, distributions loosely governed by an exponential distribution, could be transformed with a log function - at which point is it acceptable to look for outliers based on the same IQR definition?
That totally depends on what you mean by "acceptable". Note, however, that:
i) the resulting distribution isn't actually symmetric, but distinctly left-skew.
As a result, you'll usually only mark points in the left end (i.e. close to zero, where you expect exponential values to be anyway) rather than in the right (where the "outliers" might be), unless they're really extreme.
ii) suitability of such a rule is going to be heavily dependent on what you're doing.
If you're concerned about the odd strange value affecting your inference, in general, you're probably better off using robust procedures than formally identifying outliers.
If you really do want to use a normal-based rule for transformed exponential or Poisson data, I'd at least suggest applying it to the square root$^{\ddagger}$ for a Poisson (as long as the mean isn't too small, it should be roughly normalish) and to cube root or even fourth root for the exponential (and perhaps, by extension, the geometric).
$\ddagger$ or perhaps $\sqrt{X+\frac{3}{8}}$, as in the Anscombe transform
For an exponential, in large samples the cube-root approach will tend to mark points only in the upper tail (at roughly the same rate it marks them in the upper tail for a normal) and the fourth-root approach marks points in both tails (slightly more in the lower tail, in total at something near 40% of the rate it does so for a normal). Of the possibilities, the cube root makes more sense to me than the other two, but I wouldn't necessarily advise using this as some hard and fast rule.
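One way to sketch that suggestion (the cube root and the conventional 1.5×IQR fences are just illustrative choices, not a recommendation, as the answer stresses):

```r
# Illustrative sketch: apply boxplot-style 1.5*IQR fences on
# cube-root-transformed exponential data. Points flagged this way are
# at most *potential* outliers, not automatic rejects.
set.seed(42)
x  <- rexp(1000)
tx <- x^(1/3)                               # cube root: roughly symmetric
q  <- quantile(tx, c(0.25, 0.75))
fences  <- c(q[1] - 1.5 * IQR(tx), q[2] + 1.5 * IQR(tx))
flagged <- x[tx < fences[1] | tx > fences[2]]
length(flagged) / length(x)                 # small fraction, mostly upper tail
```

With these quantiles the lower fence falls below zero on the transformed scale, so (consistent with the large-sample behaviour described above) only upper-tail points end up flagged here.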
|
Outlier Detection on skewed Distributions
|
Under a classical definition of an outlier as a data point outside the 1.5*IQR from the upper or lower quartile,
This is the rule for identifying points outside the ends of the whiskers in a boxplot.
|
Outlier Detection on skewed Distributions
Under a classical definition of an outlier as a data point outside the 1.5*IQR from the upper or lower quartile,
This is the rule for identifying points outside the ends of the whiskers in a boxplot. Tukey himself would no doubt object to calling them outliers on this basis (he didn't necessarily regard points outside those limits as outliers). These would rather be points which - if your data were expected to come from a distribution somewhat similar to a normal distribution - one might subject to further investigation (such as checking you didn't transpose two digits, for example) -- at most these could be potential outliers. As Nick Cox points out in comments under this answer, a tail of many such points would be taken more as an indicator that a re-expression might be suitable than as an indication of the need to regard the points as outliers.
there is an assumption of a non-skewed distribution.
I assumed by 'non-skewed' you mean symmetric. Then the assumption is more than just that. A heavy-tailed but symmetric distribution might have many points outside the bounds on that rule.
For skewed distributions (Exponential, Poisson, Geometric, etc) is the best way to detect an outlier by analyzing a transform of the original function?
That depends on what constitutes an outlier for your purposes. There's no single definition that's suitable for every purpose - indeed, generally you're probably better off doing other things than (say) picking outliers and omitting them.
For the exponential or geometric, you might do a similar calculation to that for a boxplot, but which would identify a similar fraction in the right tail only (you won't have low-end points identified in an exponential or geometric)$^{\dagger}$ ... or you might do something else.
$\dagger$ In large samples from a normal distribution, the boxplot marks about 0.35% of points at each end, or about 0.7% in total. For an exponential you might mark some multiple of the median, for example. If you wanted to tag roughly 0.7% of points in total for an actual exponential, that would suggest marking points beyond about 7.1 times the median.
Marking points above 7.1 times the median for n=1000 will typically hit between 0.4% to 1.1% of values:
ae <- rexp(1000)
table( ae > 7.1*median(ae) )
FALSE TRUE
993 7
For example, distributions loosely governed by an exponential distribution, could be transformed with a log function - at which point is it acceptable to look for outliers based on the same IQR definition?
That totally depends on what you mean by "acceptable". Note, however, that:
i) the resulting distribution isn't actually symmetric, but distinctly left-skew.
As a result, you'll usually only mark points in the left end (i.e. close to zero, where you expect exponential values to be anyway) rather than in the right (where the "outliers" might be), unless they're really extreme.
ii) suitability of such a rule is going to be heavily dependent on what you're doing.
If you're concerned about the odd strange value affecting your inference, in general, you're probably better off using robust procedures than formally identifying outliers.
If you really do want to use a normal-based rule for transformed exponential or Poisson data, I'd at least suggest applying it to the square root$^{\ddagger}$ for a Poisson (as long as the mean isn't too small, it should be roughly normalish) and to cube root or even fourth root for the exponential (and perhaps, by extension, the geometric).
$\ddagger$ or perhaps $\sqrt{X+\frac{3}{8}}$, as in the Anscombe transform
For an exponential, in large samples the cube-root approach will tend to mark points only in the upper tail (at roughly the same rate it marks them in the upper tail for a normal) and the fourth-root approach marks points in both tails (slightly more in the lower tail, in total at something near 40% of the rate it does so for a normal). Of the possibilities, the cube root makes more sense to me than the other two, but I wouldn't necessarily advise using this as some hard and fast rule.
|
Outlier Detection on skewed Distributions
Under a classical definition of an outlier as a data point outside the 1.5*IQR from the upper or lower quartile,
This is the rule for identifying points outside the ends of the whiskers in a boxplot.
|
7,484
|
Outlier Detection on skewed Distributions
|
I will answer your questions in the opposite order in which you asked them, so that the exposition proceeds from the specific to the general.
First, let us consider a situation where you can assume that except for a minority of outliers, the bulk of your data can be well described by
a known distribution (in your case the exponential).
If $x$ has pdf:
$$p_X(x)=\sigma^{-1}\mbox{exp}\left(\frac{-(x-\theta)}{\sigma}\right),\;x>0;\sigma>0$$
then $x$ is said to follow an exponential distribution (the special case where we set $\theta=0$ is called the one-parameter or standard exponential distribution).
The usual MLE estimators of the parameters are [0, p. 506]:
$$\hat{\theta}=\min_i x_i$$
and
$$\hat{\sigma}=\mbox{ave}_ix_i-\min_i x_i$$
Here is an example in R:
n<-100
theta<-1
sigma<-2
set.seed(123) #for reproducibility
x<-rexp(n,rate=1/sigma)+theta
mean(x)-min(x)
the MLE of $\sigma$ is $\approx2.08$.
Unfortunately, the MLE estimates are very sensitive to the presence of outliers. For example, if I corrupt the sample by replacing 20% of the $x_i$'s by $-x_i$:
m<-floor(0.2*n)
y<-x
y[1:m]<--y[1:m]
mean(y)-min(y)
the MLE of $\sigma$ based on the corrupted sample is now
$\approx11.12$(!).
As a second example, if I corrupt the sample by replacing 20% of the $x_i$'s by $100x_i$ (say if the decimal place was accidentally misplaced):
m<-floor(0.2*n)
z<-x
z[1:m]<-100*z[1:m]
mean(z)-min(z)
the MLE of $\sigma$ based on this second corrupted sample is now
$\approx54$(!).
An alternative to the raw MLE is to (a) find the outliers using a robust outlier identification rule, (b) set them aside as spurious data and (c) compute the MLE on the non spurious part of the sample.
The most well known of these robust outlier identification rules is the med/mad rule proposed by Hampel[3], who attributed it to Gauss (I illustrated this rule here). In the med/mad rule, the rejection thresholds are based on the assumption that the genuine observations in your sample are well approximated by a normal distribution.
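For reference, the med/mad rule is only a few lines of R (the cutoff of 3 is conventional but arbitrary):

```r
# Hampel's med/mad rule: flag x_i as outlying when
# |x_i - median(x)| / mad(x) exceeds a cutoff (3 is a common choice).
# mad() in R already includes the 1.4826 normal-consistency factor.
hampel_flags <- function(x, cutoff = 3) {
  abs(x - median(x)) / mad(x) > cutoff
}

set.seed(7)
v <- c(rnorm(100), 10)     # 100 genuine values plus one gross error at 10
which(hampel_flags(v))     # flags the gross error (position 101)
```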
Of course, if you have extra information (such as knowing that the distribution of the genuine observations is well approximated by a Poisson distribution, as in this example)
there is nothing to prevent you from transforming your data and using the baseline outlier rejection rule (the med/mad), but it strikes me as a bit awkward to transform the data to preserve what is, after all, an ad hoc rule.
It seems much more logical to me to preserve the data but adapt the rejection
rules. Then, you would still use the 3 step procedure I described in the first link above, but with rejection threshold adapted to the distribution you suspect the good part of the data has. Below, I give the rejection rule in situations where the genuine observations are well fitted by an exponential distribution. In this case, you can construct good rejection thresholds using the following rule:
1) estimate $\theta$ using [1]:
$$\hat{\theta}'=\mbox{med}_ix_i-3.476\mbox{Qn}(x)\ln2$$
The Qn is a robust estimate of scatter that is not geared towards symmetric data. It is widely implemented, for example in the R package robustbase. For exponentially distributed data, the Qn is multiplied by a consistency factor of $\approx3.476$; see [1] for more details.
2) reject as spurious all observations outside of [2,p 188]
$$[\hat{\theta}',9(1+2/n)\mbox{med}_ix_i+\hat{\theta}']$$
(the factor 9 in the rule above is obtained in the same way as the 7.1 in Glen_b's answer above, but using a higher cut-off. The factor (1+2/n) is a small-sample correction factor that was derived by simulations in [2]. For large enough sample sizes, it is essentially equal to 1).
3) use the MLE on the non spurious data to estimate $\sigma$:
$$\hat{\sigma}'=\mbox{ave}_{i\in H}x_i-\mbox{min}_{i\in H}x_i$$
where $H=\{i:\hat{\theta}'\leq x_i \leq 9(1+2/n)\mbox{med}_ix_i+\hat{\theta}'\}$.
using this rule on the previous examples, you would get:
library(robustbase)
theta<-median(x)-Qn(x,constant=3.476)*log(2)
clean<-which(x>=theta & x<=9*(1+2/n)*median(x)+theta)
mean(x[clean])-min(x[clean])
the robust estimate of $\sigma$ is now
$\approx2.05$ (very close to the MLE value when the data is clean).
On the second example:
theta<-median(y)-Qn(y,constant=3.476)*log(2)
clean<-which(y>=theta & y<=9*(1+2/n)*median(y)+theta)
mean(y[clean])-min(y[clean])
The robust estimate of $\sigma$ is now
$\approx2.2$ (very close to the value we would have gotten without the outliers).
On the third example:
theta<-median(z)-Qn(z,constant=3.476)*log(2)
clean<-which(z>=theta & z<=9*(1+2/n)*median(z)+theta)
mean(z[clean])-min(z[clean])
The robust estimate of $\sigma$ is now
$\approx2.2$ (very close to the value we would have gotten without the outliers).
A side benefit of this approach is that it yields a subset of indexes of
suspect observations which should be set aside from the rest of the data, perhaps to be studied as objects of interest in their own right (the members
of $\{i:i\notin H\}$).
Now, for the general case where you do not have a good candidate distribution
to fit the bulk of your observations beyond knowing that a symmetric distribution won't do, you can use the adjusted boxplot[4]. This is a generalization of the boxplot that takes into account a (non-parametric and outlier-robust) measure of the skewness of your data (so that when the bulk of the data is symmetric it collapses down to the usual boxplot). You can also check this answer for an illustration.
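The adjusted boxplot is available as adjbox in the robustbase package; a minimal sketch comparing it with the classical boxplot on right-skewed data (assuming robustbase is installed):

```r
# Sketch: side-by-side classical boxplot vs. skewness-adjusted boxplot
# (adjbox) on exponential data; adjbox's upper fence adapts to the skew,
# so it flags far fewer genuine exponential values as outlying.
set.seed(1)
x <- rexp(500)

if (requireNamespace("robustbase", quietly = TRUE)) {
  op <- par(mfrow = c(1, 2))
  boxplot(x, main = "classical")             # many upper-tail flags
  robustbase::adjbox(x, main = "adjusted")   # fences adapted to skewness
  par(op)
}
```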
[0] Johnson N. L., Kotz S., Balakrishnan N. (1994). Continuous Univariate Distributions, Volume 1, 2nd Edition.
[1] Rousseeuw P. J. and Croux C. (1993).
Alternatives to the Median Absolute Deviation.
Journal of the American Statistical Association, Vol. 88, No. 424, pp. 1273--1283.
[2] J. K. Patel, C. H. Kapadia, and D. B. Owen, Dekker (1976).
Handbook of statistical distributions.
[3] Hampel (1974). The Influence Curve and Its Role in Robust Estimation. Journal of the American Statistical Association
Vol. 69, No. 346 (Jun., 1974), pp. 383-393.
[4] Vandervieren, E., Hubert, M. (2004) "An adjusted boxplot for skewed distributions". Computational Statistics & Data Analysis
Volume 52, Issue 12, 15 August 2008, Pages 5186–5201.
|
Outlier Detection on skewed Distributions
|
I will answer your questions in the opposite order in which you asked them, so that the exposition proceeds from the specific to the general.
First, let us consider a situation where you can assume t
|
Outlier Detection on skewed Distributions
I will answer your questions in the opposite order in which you asked them, so that the exposition proceeds from the specific to the general.
First, let us consider a situation where you can assume that except for a minority of outliers, the bulk of your data can be well described by
a known distribution (in your case the exponential).
If $x$ has pdf:
$$p_X(x)=\sigma^{-1}\mbox{exp}\left(\frac{-(x-\theta)}{\sigma}\right),\;x>0;\sigma>0$$
then $x$ is said to follow an exponential distribution (the special case where we set $\theta=0$ is called the one-parameter or standard exponential distribution).
The usual MLE estimators of the parameters are [0, p. 506]:
$$\hat{\theta}=\min_i x_i$$
and
$$\hat{\sigma}=\mbox{ave}_ix_i-\min_i x_i$$
Here is an example in R:
n<-100
theta<-1
sigma<-2
set.seed(123) #for reproducibility
x<-rexp(n,rate=1/sigma)+theta
mean(x)-min(x)
the MLE of $\sigma$ is $\approx2.08$.
Unfortunately, the MLE estimates are very sensitive to the presence of outliers. For example, if I corrupt the sample by replacing 20% of the $x_i$'s by $-x_i$:
m<-floor(0.2*n)
y<-x
y[1:m]<--y[1:m]
mean(y)-min(y)
the MLE of $\sigma$ based on the corrupted sample is now
$\approx11.12$(!).
As a second example, if I corrupt the sample by replacing 20% of the $x_i$'s by $100x_i$ (say if the decimal place was accidentally misplaced):
m<-floor(0.2*n)
z<-x
z[1:m]<-100*z[1:m]
mean(z)-min(z)
the MLE of $\sigma$ based on this second corrupted sample is now
$\approx54$(!).
An alternative to the raw MLE is to (a) find the outliers using a robust outlier identification rule, (b) set them aside as spurious data and (c) compute the MLE on the non spurious part of the sample.
The most well known of these robust outlier identification rules is the med/mad rule proposed by Hampel[3], who attributed it to Gauss (I illustrated this rule here). In the med/mad rule, the rejection thresholds are based on the assumption that the genuine observations in your sample are well approximated by a normal distribution.
Of course, if you have extra information (such as knowing that the distribution of the genuine observations is well approximated by a Poisson distribution, as in this example)
there is nothing to prevent you from transforming your data and using the baseline outlier rejection rule (the med/mad), but it strikes me as a bit awkward to transform the data to preserve what is, after all, an ad hoc rule.
It seems much more logical to me to preserve the data but adapt the rejection
rules. Then, you would still use the 3 step procedure I described in the first link above, but with rejection threshold adapted to the distribution you suspect the good part of the data has. Below, I give the rejection rule in situations where the genuine observations are well fitted by an exponential distribution. In this case, you can construct good rejection thresholds using the following rule:
1) estimate $\theta$ using [1]:
$$\hat{\theta}'=\mbox{med}_ix_i-3.476\mbox{Qn}(x)\ln2$$
The Qn is a robust estimate of scatter that is not geared towards symmetric data. It is widely implemented, for example in the R package robustbase. For exponentially distributed data, the Qn is multiplied by a consistency factor of $\approx3.476$; see [1] for more details.
2) reject as spurious all observations outside of [2,p 188]
$$[\hat{\theta}',9(1+2/n)\mbox{med}_ix_i+\hat{\theta}']$$
(the factor 9 in the rule above is obtained in the same way as the 7.1 in Glen_b's answer above, but using a higher cut-off. The factor (1+2/n) is a small-sample correction factor that was derived by simulations in [2]. For large enough sample sizes, it is essentially equal to 1).
3) use the MLE on the non spurious data to estimate $\sigma$:
$$\hat{\sigma}'=\mbox{ave}_{i\in H}x_i-\mbox{min}_{i\in H}x_i$$
where $H=\{i:\hat{\theta}'\leq x_i \leq 9(1+2/n)\mbox{med}_ix_i+\hat{\theta}'\}$.
using this rule on the previous examples, you would get:
library(robustbase)
theta<-median(x)-Qn(x,constant=3.476)*log(2)
clean<-which(x>=theta & x<=9*(1+2/n)*median(x)+theta)
mean(x[clean])-min(x[clean])
the robust estimate of $\sigma$ is now
$\approx2.05$ (very close to the MLE value when the data is clean).
On the second example:
theta<-median(y)-Qn(y,constant=3.476)*log(2)
clean<-which(y>=theta & y<=9*(1+2/n)*median(y)+theta)
mean(y[clean])-min(y[clean])
The robust estimate of $\sigma$ is now
$\approx2.2$ (very close to the value we would have gotten without the outliers).
On the third example:
theta<-median(z)-Qn(z,constant=3.476)*log(2)
clean<-which(z>=theta & z<=9*(1+2/n)*median(z)+theta)
mean(z[clean])-min(z[clean])
The robust estimate of $\sigma$ is now
$\approx2.2$ (very close to the value we would have gotten without the outliers).
A side benefit of this approach is that it yields a subset of indexes of
suspect observations which should be set aside from the rest of the data, perhaps to be studied as objects of interest in their own right (the members
of $\{i:i\notin H\}$).
Now, for the general case where you do not have a good candidate distribution
to fit the bulk of your observations beyond knowing that a symmetric distribution won't do, you can use the adjusted boxplot[4]. This is a generalization of the boxplot that takes into account a (non-parametric and outlier-robust) measure of the skewness of your data (so that when the bulk of the data is symmetric it collapses down to the usual boxplot). You can also check this answer for an illustration.
[0] Johnson N. L., Kotz S., Balakrishnan N. (1994). Continuous Univariate Distributions, Volume 1, 2nd Edition.
[1] Rousseeuw P. J. and Croux C. (1993).
Alternatives to the Median Absolute Deviation.
Journal of the American Statistical Association, Vol. 88, No. 424, pp. 1273--1283.
[2] J. K. Patel, C. H. Kapadia, and D. B. Owen, Dekker (1976).
Handbook of statistical distributions.
[3] Hampel (1974). The Influence Curve and Its Role in Robust Estimation. Journal of the American Statistical Association
Vol. 69, No. 346 (Jun., 1974), pp. 383-393.
[4] Vandervieren, E., Hubert, M. (2004) "An adjusted boxplot for skewed distributions". Computational Statistics & Data Analysis
Volume 52, Issue 12, 15 August 2008, Pages 5186–5201.
|
Outlier Detection on skewed Distributions
I will answer your questions in the opposite order in which you asked them, so that the exposition proceeds from the specific to the general.
First, let us consider a situation where you can assume t
|
7,485
|
Outlier Detection on skewed Distributions
|
First, I'd question the definition, classical or otherwise. An "outlier" is a surprising point. Using any particular rule (even for symmetric distributions) is a flawed idea, especially nowadays when there are so many huge data sets. In a data set of (say) one million observations (not all that big, in some fields), there will be many many cases beyond the 1.5 IQR limit you cite, even if the distribution is perfectly normal.
Second, I'd suggest looking for outliers on the original data. It will nearly always be more intuitive. For instance, with income data, it is quite common to take logs. But even here I'd look for outliers on the original scale (dollars or euros or whatever) because we have a better feel for such numbers. (If you do take logs, I'd suggest log base 10, at least for outlier detection, because it is at least a little intuitive).
Third, when looking for outliers, beware of masking.
Finally, I am currently researching the "forward search" algorithm proposed by Atkinson and Riani for various sorts of data and problems. This looks very promising.
|
7,486
|
How are the standard errors computed for the fitted values from a logistic regression?
|
The prediction is just a linear combination of the estimated coefficients. The coefficients are asymptotically normal, so a linear combination of those coefficients will be asymptotically normal as well. So if we can obtain the covariance matrix for the parameter estimates we can obtain the standard error for a linear combination of those estimates easily. If I denote the covariance matrix as $\Sigma$ and write the coefficients for my linear combination in a vector as $C$ then the standard error is just $\sqrt{C' \Sigma C}$
# Making fake data and fitting the model and getting a prediction
set.seed(500)
dat <- data.frame(x = runif(20), y = rbinom(20, 1, .5))
o <- glm(y ~ x, data = dat)
pred <- predict(o, newdata = data.frame(x=1.5), se.fit = TRUE)
# To obtain a prediction for x=1.5 I'm really
# asking for yhat = b0 + 1.5*b1 so my
# C = c(1, 1.5)
# and vcov applied to the glm object gives me
# the covariance matrix for the estimates
C <- c(1, 1.5)
std.er <- sqrt(t(C) %*% vcov(o) %*% C)
> pred$se.fit
[1] 0.4246289
> std.er
[,1]
[1,] 0.4246289
We see that the 'by hand' method I show gives the same standard error as reported via predict
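The quadratic form $\sqrt{C' \Sigma C}$ is easy to reproduce in any language. A minimal NumPy sketch with a made-up covariance matrix (the numbers below are illustrative only, not the `vcov(o)` from the R fit above):

```python
import numpy as np

# Hypothetical covariance matrix of (b0, b1) -- stands in for vcov(o) in the R code
Sigma = np.array([[ 0.040, -0.055],
                  [-0.055,  0.100]])

# Prediction at x = 1.5 means yhat = b0 + 1.5*b1, so C = (1, 1.5)
C = np.array([1.0, 1.5])

se = np.sqrt(C @ Sigma @ C)  # sqrt of the quadratic form C' Sigma C
print(se)                    # about 0.316 for this made-up Sigma
```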
|
7,487
|
Things to consider about masters programs in statistics
|
Here is a somewhat blunt set of general thoughts and recommendations
on masters programs in statistics. I don't intend for them to be
polemic, though some of them may sound like that.
I am going to assume that you are interested in a terminal masters
degree to later go into industry and are not interested in
potentially pursuing a doctorate. Please do not take this reply as
authoritative, though.
Below are several points of advice from my own experiences. I've
ordered them very roughly from what I think is most important to
least. As you choose a program, you might weigh each of them against
one another taking some of the points below into account.
Try to make the best choice for you personally. There are very
many factors involved in such a decision: geography, personal
relationships, job and networking opportunities, coursework,
costs of education and living, etc. The most important thing is
to weigh each of these yourself and try to use your own best
judgment. You are the one that ultimately lives with the
consequences of your choice, both positive and negative, and
you are the only one in a position to appraise your whole
situation. Act accordingly.
Learn to collaborate and manage your time. You may not believe
me, but an employer will very likely care more about your
personality, ability to collaborate with others and ability to
work efficiently than they will care about your raw technical
skills. Effective communication is crucial in statistics,
especially when communicating with nonstatisticians. Knowing how
to manage a complex project and make steady progress is very
important. Take advantage of structured statistical-consulting opportunities, if they exist, at your chosen institution.
Learn a cognate area. The greatest weakness I see in many
masters and PhD graduates in statistics, both in industry and
in academia, is that they often have very little subject-matter
knowledge. The upshot is that sometimes "standard" statistical
analyses get used due to a lack of understanding of the underlying
mechanisms of the problem they are trying to analyze. Developing
some expertise in a cognate area can, therefore, be very
enriching both statistically and professionally. But, the most
important aspect of this is the learning itself: Realizing that
incorporating subject matter knowledge can be vital to
correctly analyzing a problem. Being competent in the vocabulary
and basic knowledge can also aid greatly in communication and will
improve the perception that your nonstatistician colleagues have
of you.
Learn to work with (big) data. Data sets in virtually every
field that uses statistics have been growing tremendously in size
over the last 20 years. In an industrial setting, you will likely
spend more time manipulating data than you will analyzing
them. Learning good data-management procedures, sanity checking,
etc. is crucial to valid analysis. The more efficient you become
at it, the more time you'll spend doing the "fun" stuff. This
is something that is very heavily underemphasized and
underappreciated in academic programs. Luckily, there are now
some bigger data sets available to the academic community that
one can play with. If you can't do this within the program
itself, spend some time doing so outside of it.
Learn linear regression and the associated applied linear algebra
very, very well. It is surprising how many masters and PhD
graduates obtain their degrees (from "top" programs!), but
can't answer basic questions on linear regression or how it
works. Having this material down cold will serve you incredibly
well. It is important in its own right and is the gateway to
many, many more advanced statistical and machine-learning
techniques.
If possible, do a masters report or thesis. The masters
programs associated with some of the top U.S. statistics departments
(usually gauged more on their doctorate programs) seem to have
moved away from incorporating a report or a thesis. The fact of
the matter is that a purely course-based program usually deprives
the student of developing any real depth of knowledge in a
particular area. The area itself is not so important, in my view,
but the experience is. The persistence, time-management,
collaboration with faculty, etc. required to produce a masters
report or thesis can pay off greatly when transitioning to
industry. Even if a program doesn't advertise one, if you're
otherwise interested in it, send an email to the admissions chair
and ask about the possibility of a customized program that allows
for it.
Take the most challenging coursework you can manage. While the
most important thing is to understand the core material very,
very well, you should also use your time and money wisely by
challenging yourself as much as possible. The particular topic
matter you choose to learn may appear to be fairly "useless",
but getting some contact with the literature and challenging
yourself to learn something new and difficult will make it easier
when you have to do so later in industry. For example, learning
some of the theory behind classical statistics turns out to be
fairly useless in and of itself for the daily work of many
industrial statisticians, but the concepts conveyed are
extremely useful and provide continual guidance. It also will
make all the other statistical methods you come into contact with
seem less mysterious.
A program's reputation only matters for your first job. Way too
much emphasis is put on a school's or program's reputation.
Unfortunately, this is a time- and energy-saving heuristic for
human-resource managers. Be aware that programs are judged much
more by their research and doctoral programs than their masters
ones. In many such top departments, the M.S. students often end up
feeling a bit like second-class citizens since most of the
resources are expended on the doctoral programs.
One of the brightest young statistical
collaborators I've worked with has a doctorate from a small
foreign university you've probably never heard of. People can get
a wonderful education (sometimes a much better one, especially at
the undergraduate and masters level!) at "no-name"
institutions than at "top" programs. They're almost guaranteed
to get more interaction with core faculty at the former.
The name of the school at the top of your resume is likely to
have a role in getting you in the door for your first job and
people will care more about where your most advanced degree came
from than where any others did. After that first job, people will care substantially more about what
experience you bring to the table. Finding a school where lots of
interesting job opportunities come to you through career fairs,
circulated emails, etc., can have a big payoff and this happens
more at top programs.
A personal remark: I personally have a preference for somewhat
more theoretical programs that still allow some contact with data
and a smattering of applied courses. The fact of the matter is that
you're simply not going to become a good applied statistician by
obtaining a masters degree. That comes only with (much more) time
and experience in struggling with challenging problems and analyses
on a daily basis.
|
7,488
|
Things to consider about masters programs in statistics
|
I would advise either getting into the best school possible with a brand name (like MIT), or the best overall deal (e.g. a decent public school with in-state tuition). I would not waste money on second-rate private schools.
The brand-name schools pay off. The price difference between a school like MIT and second-tier schools like GWU is not big enough to justify the difference in brand power.
On the other hand, some public schools, e.g. William and Mary, while being dirt cheap, offer a decent education. Some of them even have comparable brand power, e.g. Berkeley vs. Stanford. Thus, due to the significant cost difference, they're an alternative to the best private schools.
|
7,489
|
Things to consider about masters programs in statistics
|
Take a look at pharmacoepidemiology, in particular as it relates to drug safety. This is a very new area of research with lots of very interesting questions.
|
7,490
|
Distributions other than the normal where mean and variance are independent
|
Note: Please read the answer by @G. Jay Kerns, and see Carlin and Lewis 1996 or your favorite probability reference for background on the calculation of mean and variance as the expected value and second moment of a random variable.
A quick scan of Appendix A in Carlin and Lewis (1996) provides the following distributions which are similar in this regard to the normal, in that the same distribution parameters are not used in the calculations of the mean and variance. As pointed out by @robin, when calculating parameter estimates from a sample, the sample mean is required to calculate sigma.
Multivariate Normal
$$E(X) = \mu$$
$$Var(X) = \Sigma$$
t and multivariate t:
$$E(X) = \mu$$
$$Var(X) = \nu\sigma^2/(\nu - 2)$$
Double exponential:
$$E(X) = \mu$$
$$Var(X) = 2\sigma^2$$
Cauchy:
With some qualification it could be argued that the mean and variance of the Cauchy are not dependent.
$E(X)$ and $Var(X)$ do not exist
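As a quick simulation check of one of these entries, the double exponential (Laplace) case can be verified directly, assuming NumPy's `loc`/`scale` parameterization corresponds to the $\mu, \sigma$ above:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 3.0, 2.0

# NumPy's "scale" is the sigma in the density exp(-|x - mu|/sigma) / (2*sigma)
x = rng.laplace(loc=mu, scale=sigma, size=1_000_000)

print(x.mean())  # close to mu = 3
print(x.var())   # close to 2 * sigma**2 = 8
```

The variance depends only on $\sigma$, not on $\mu$, consistent with the formula above.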
Reference
Carlin, Bradley P., and Thomas A. Louis. 1996. Bayes and Empirical bayes Methods for Data Analysis, 2nd ed. Chapman and Hall/CRC, New York
|
7,491
|
Distributions other than the normal where mean and variance are independent
|
In fact, the answer is "no". Independence of the sample mean and variance characterizes the normal distribution. This was shown by Eugene Lukacs in "A Characterization of the Normal Distribution", The Annals of Mathematical Statistics, Vol. 13, No. 1 (Mar., 1942), pp. 91-93.
I didn't know this, but Feller, "Introduction to Probability Theory and Its Applications, Volume II" (1966, pg 86) says that R.C. Geary proved this, too.
|
7,492
|
How exactly is the "effectiveness" in the Moderna and Pfizer vaccine trials estimated?
|
Moderna
Based on the press release we can assume that there were 30 000 patients total and observed were 90 infections among placebo and 5 infections among the vaccinated group.
Let's assume that the vaccine group and placebo group were each of the same size 15 000.
So, calculated on the back of an envelope, instead of 90 infections you got 5 infections. The reduction due to the vaccine is 85 out of 90 patients that did not get infected (without vaccine 90 get infected with vaccine 5 get infected, so presumably the vaccine reduced it from 90 down to 5). This is the $85/90 \approx 94.4 \%$ that is the number you see in the news.
This would normally need to be adjusted. The groups may not have been the same sizes and the people may not have been exposed at the same time (you do not get everybody vaccinated at exactly the same time). So eventually you will be doing some more complicated computation of the risk, and based on the ratio of those figures you get a more exact figure (but the back-of-the-envelope calculation will be reasonably close).
In addition, the $94.4\%$ is just a point estimate. Normally a range of confidence is given for an estimate (confidence interval). Roughly speaking this is a measure for how accurate/certain the measurement/estimate is. It gives some boundaries for failure of the estimate (typical are 95% boundaries).
One way to compute a confidence interval for ratios is to express the ratio in terms of log odds, apply an approximation formula for the standard error, use that to construct the interval, and then convert back to ratios. This would give a $95\%$ confidence interval between $88.0\%$ and $97.8\%$ for the effectiveness.
$$\begin{array}{} \text{log_odds} &=& \log \frac{5}{90} \approx -2.89\\
\text{S.E.}_\text{log_odds} &\approx& \sqrt{\frac{1}{5}+\frac{1}{90}+\frac{1}{14995}+\frac{1}{14910}} \approx 0.460\\
CI_{95\%}(\text{log_odds}) &\approx& \text{log_odds}-1.96\text{S.E.}_\text{log_odds} \, , \, \text{log_odds}+1.96\text{S.E.}_\text{log_odds}\\ & \approx &-3.79,-1.99 \\
CI_{95\%}(\text{odds}) &\approx& 0.0225,\ 0.137 \\
CI_{95\%}(\text{effectivity}) &=& \frac{1}{1+CI_{95\%}(\text{odds})} \\&\approx& 88.0 \%,\ 97.8 \% \end{array}$$
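The arithmetic above can be reproduced in a few lines. A sketch in Python (the group sizes are the assumed 15 000 per arm, not figures from the press release):

```python
import math

inf_vax, inf_plc = 5, 90          # infections observed in each arm
n_vax, n_plc = 15_000, 15_000     # assumed group sizes

# Log odds that an infected participant was in the vaccine group
log_odds = math.log(inf_vax / inf_plc)

# Approximate standard error of the log odds (Woolf's formula for a 2x2 table)
se = math.sqrt(1/inf_vax + 1/inf_plc + 1/(n_vax - inf_vax) + 1/(n_plc - inf_plc))

lo, hi = log_odds - 1.96 * se, log_odds + 1.96 * se

# Map the interval back from log odds to effectivity = 1 / (1 + odds)
eff_lo = 1 / (1 + math.exp(hi))
eff_hi = 1 / (1 + math.exp(lo))
print(round(100 * eff_lo, 1), round(100 * eff_hi, 1))  # matches the 88.0-97.8% interval
```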
These computations assume ideal situations (as if the numbers 5 and 90 stem from nicely understood causes of variation). In particular, they assume there is no interference that breaks the statistical model. E.g., patients that got vaccinated and had fever or other symptoms afterward may have been distancing more because of that. For them, the exposure is less, and that is not taken into account in the back-of-the-envelope calculation. In addition, this relates to effectivity over the total period (in which the infection pressure may not have been equally distributed). Based on these simple figures, we cannot say with the same accuracy how effective the vaccination is as a function of time (in particular, whether the immunity decreases over time).
|
7,493
|
What is difference between 'transfer learning' and 'domain adaptation'?
|
It seems that there is some disagreement between researchers on what the difference between 'transfer learning' and 'domain adaptation' is.
From {0}:
The notion of domain adaptation is closely related to transfer learning. Transfer learning is a general term that refers to a class of machine learning problems that involve different tasks or domains. In the literature, there isn't yet a standard definition of transfer learning. In some papers it's interchangeable with domain
adaptation.
From {1}:
References:
{0} Li, Qi. "Literature survey: domain adaptation algorithms for natural language processing." Department of Computer Science The Graduate Center, The City University of New York (2012): 8-10. https://scholar.google.com/scholar?cluster=2828982016930721315&hl=en&as_sdt=0,22 ; https://pdfs.semanticscholar.org/532e/3d5b1b5807771b77cac60fe8594b506fcff9.pdf ; http://nlp.cs.rpi.edu/paper/qisurvey.pdf (mirror)
{1} Pan, Sinno Jialin, and Qiang Yang. "A survey on transfer learning." IEEE Transactions on knowledge and data engineering 22, no. 10 (2010): 1345-1359. https://scholar.google.com/scholar?cluster=17771403852323259019&hl=en&as_sdt=0,22 ; http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.158.4126&rep=rep1&type=pdf (mirror) (2.6k citations)
|
What is difference between 'transfer learning' and 'domain adaptation'?
|
It seems that there is some disagreement between researchers on what the difference between 'transfer learning' and 'domain adaptation' is.
From {0}:
The notion of domain adaptation is closely relat
|
What is difference between 'transfer learning' and 'domain adaptation'?
It seems that there is some disagreement between researchers on what the difference between 'transfer learning' and 'domain adaptation' is.
From {0}:
The notion of domain adaptation is closely related to transfer learning. Transfer learning is a general term that refers to a class of machine learning problems that involve different tasks or domains. In the literature, there isn't yet a standard definition of transfer learning. In some papers it's interchangeable with domain
adaptation.
From {1}:
References:
{0} Li, Qi. "Literature survey: domain adaptation algorithms for natural language processing." Department of Computer Science The Graduate Center, The City University of New York (2012): 8-10. https://scholar.google.com/scholar?cluster=2828982016930721315&hl=en&as_sdt=0,22 ; https://pdfs.semanticscholar.org/532e/3d5b1b5807771b77cac60fe8594b506fcff9.pdf ; http://nlp.cs.rpi.edu/paper/qisurvey.pdf (mirror)
{1} Pan, Sinno Jialin, and Qiang Yang. "A survey on transfer learning." IEEE Transactions on knowledge and data engineering 22, no. 10 (2010): 1345-1359. https://scholar.google.com/scholar?cluster=17771403852323259019&hl=en&as_sdt=0,22 ; http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.158.4126&rep=rep1&type=pdf (mirror) (2.6k citations)
|
What is difference between 'transfer learning' and 'domain adaptation'?
It seems that there is some disagreement between researchers on what the difference between 'transfer learning' and 'domain adaptation' is.
From {0}:
The notion of domain adaptation is closely relat
|
7,494
|
What is difference between 'transfer learning' and 'domain adaptation'?
|
From Hal Daume's article [1]:
The standard classification setting is a input distribution p(X) and a
label distribution p(Y|X). Domain adaptation: when p(X) changes
between training and test. Transfer learning: when p(Y|X) changes
between training and test.
In other words, in DA the input distribution changes but the labels
remain the same; in TL, the input distributions stays the same, but
the labels change.
https://nlpers.blogspot.com/2007/11/domain-adaptation-vs-transfer-learning.html (mirror)
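Daumé's distinction can be made concrete with synthetic data. A hypothetical sketch (the variable names and the threshold labeling rules are invented for illustration) where only p(X) shifts, his domain adaptation case, versus only p(Y|X) shifts, his transfer learning case:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source domain: x ~ N(0, 1), labeling rule y = 1 iff x > 0
x_src = rng.normal(0.0, 1.0, 1000)
y_src = (x_src > 0).astype(int)

# Domain adaptation: p(X) shifts (new mean), p(Y|X) stays identical
x_da = rng.normal(2.0, 1.0, 1000)
y_da = (x_da > 0).astype(int)   # same labeling rule as the source

# Transfer learning (in Daume's sense): p(X) identical, p(Y|X) changes
x_tl = rng.normal(0.0, 1.0, 1000)
y_tl = (x_tl > 1).astype(int)   # different labeling rule

# The input means differ under DA but not under TL
print(x_src.mean(), x_da.mean(), x_tl.mean())
```

Under the DA shift a source-trained classifier sees unfamiliar inputs; under the TL shift the inputs look familiar but the decision boundary has moved.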
|
What is difference between 'transfer learning' and 'domain adaptation'?
|
From Hal Daume's article [1]:
The standard classification setting is a input distribution p(X) and a
label distribution p(Y|X). Domain adaptation: when p(X) changes
between training and test. T
|
What is difference between 'transfer learning' and 'domain adaptation'?
From Hal Daume's article [1]:
The standard classification setting is a input distribution p(X) and a
label distribution p(Y|X). Domain adaptation: when p(X) changes
between training and test. Transfer learning: when p(Y|X) changes
between training and test.
In other words, in DA the input distribution changes but the labels
remain the same; in TL, the input distributions stays the same, but
the labels change.
https://nlpers.blogspot.com/2007/11/domain-adaptation-vs-transfer-learning.html (mirror)
|
What is difference between 'transfer learning' and 'domain adaptation'?
From Hal Daume's article [1]:
The standard classification setting is a input distribution p(X) and a
label distribution p(Y|X). Domain adaptation: when p(X) changes
between training and test. T
|
7,495
|
What is difference between 'transfer learning' and 'domain adaptation'?
|
Throughout the literature on transfer learning, there are a number of terminology inconsistencies. Phrases such as transfer learning and domain adaptation are used to refer to similar processes. Domain adaptation is the process of adapting one or more source domains in order to transfer information that improves the performance of a target learner. The domain adaptation process attempts to alter a source domain to bring its distribution closer to that of the target. In the Domain Adaptation setting, the source and target domains have different marginal distributions p(X). According to Pan's survey, Transfer Learning is a broader term that can also include the case when there is a difference in the conditional distributions p(Y|X) of the source and target domains. In contrast, Daume distinguishes the two terms [1], stating that Domain Adaptation is when p(X) changes between source and target, and Transfer Learning is when p(Y|X) changes between source and target domains.
https://nlpers.blogspot.com/2007/11/domain-adaptation-vs-transfer-learning.html
|
What is difference between 'transfer learning' and 'domain adaptation'?
|
Throughout the literature on transfer learning, there are a number of terminology inconsistencies. Phrases such as transfer learning and domain adaptation are used to refer to similar processes. Domain
|
What is difference between 'transfer learning' and 'domain adaptation'?
Throughout the literature on transfer learning, there are a number of terminology inconsistencies. Phrases such as transfer learning and domain adaptation are used to refer to similar processes. Domain adaptation is the process of adapting one or more source domains in order to transfer information that improves the performance of a target learner. The domain adaptation process attempts to alter a source domain to bring its distribution closer to that of the target. In the Domain Adaptation setting, the source and target domains have different marginal distributions p(X). According to Pan's survey, Transfer Learning is a broader term that can also include the case when there is a difference in the conditional distributions p(Y|X) of the source and target domains. In contrast, Daume distinguishes the two terms [1], stating that Domain Adaptation is when p(X) changes between source and target, and Transfer Learning is when p(Y|X) changes between source and target domains.
https://nlpers.blogspot.com/2007/11/domain-adaptation-vs-transfer-learning.html
|
What is difference between 'transfer learning' and 'domain adaptation'?
Throughout the literature on transfer learning, there are a number of terminology inconsistencies. Phrases such as transfer learning and domain adaptation are used to refer to similar processes. Domain
|
7,496
|
What is difference between 'transfer learning' and 'domain adaptation'?
|
I think that "Transfer Learning" is a more general term, and "Domain Adaptation" is a scenario of "Transfer Learning".
[1] Transferable Attention for Domain Adaptation. http://ise.thss.tsinghua.edu.cn/~mlong/doc/transferable-attention-aaai19.pdf
|
What is difference between 'transfer learning' and 'domain adaptation'?
|
I think that "Transfer Learning" is a more general term, and "Domain Adaptation" is a scenario of "Transfer Learning".
[1] Transferable Attention for Domain Adaptation. http://ise.thss.tsinghua.edu.cn
|
What is difference between 'transfer learning' and 'domain adaptation'?
I think that "Transfer Learning" is a more general term, and "Domain Adaptation" is a scenario of "Transfer Learning".
[1] Transferable Attention for Domain Adaptation. http://ise.thss.tsinghua.edu.cn/~mlong/doc/transferable-attention-aaai19.pdf
|
What is difference between 'transfer learning' and 'domain adaptation'?
I think that "Transfer Learning" is a more general term, and "Domain Adaptation" is a scenario of "Transfer Learning".
[1] Transferable Attention for Domain Adaptation. http://ise.thss.tsinghua.edu.cn
|
7,497
|
What is difference between 'transfer learning' and 'domain adaptation'?
|
According to [1], domain adaptation is the term used for transfer learning in NLP: "Transfer learning in the NLP domain is sometimes referred to as domain adaptation."
[1] Pan, S. J., and Q. Yang. “A Survey on Transfer Learning.” IEEE Transactions on Knowledge and Data Engineering 22, no. 10 (October 2010): 1345–59. https://doi.org/10.1109/TKDE.2009.191 or https://www.cse.ust.hk/~qyang/Docs/2009/tkde_transfer_learning.pdf
|
What is difference between 'transfer learning' and 'domain adaptation'?
|
According to [1], domain adaptation is the term used for transfer learning in NLP: "Transfer learning in the NLP domain is sometimes referred to as domain adaptation."
[1] Pan, S. J., and Q. Yang. “A Survey on Tran
|
What is difference between 'transfer learning' and 'domain adaptation'?
According to [1], domain adaptation is the term used for transfer learning in NLP: "Transfer learning in the NLP domain is sometimes referred to as domain adaptation."
[1] Pan, S. J., and Q. Yang. “A Survey on Transfer Learning.” IEEE Transactions on Knowledge and Data Engineering 22, no. 10 (October 2010): 1345–59. https://doi.org/10.1109/TKDE.2009.191 or https://www.cse.ust.hk/~qyang/Docs/2009/tkde_transfer_learning.pdf
|
What is difference between 'transfer learning' and 'domain adaptation'?
According to [1], domain adaptation is the term used for transfer learning in NLP: "Transfer learning in the NLP domain is sometimes referred to as domain adaptation."
[1] Pan, S. J., and Q. Yang. “A Survey on Tran
|
7,498
|
What is difference between 'transfer learning' and 'domain adaptation'?
|
It seems Wikipedia has the most concise answer:
Domain adaptation is a subcategory of transfer learning. In domain
adaptation, the source and target domains all have the same feature
space (but different distributions); in contrast, transfer learning
includes cases where the target domain's feature space is different
from the source feature space or spaces.
Here is the source.
|
What is difference between 'transfer learning' and 'domain adaptation'?
|
It seems Wikipedia has the most concise answer:
Domain adaptation is a subcategory of transfer learning. In domain
adaptation, the source and target domains all have the same feature
space (but diffe
|
What is difference between 'transfer learning' and 'domain adaptation'?
It seems Wikipedia has the most concise answer:
Domain adaptation is a subcategory of transfer learning. In domain
adaptation, the source and target domains all have the same feature
space (but different distributions); in contrast, transfer learning
includes cases where the target domain's feature space is different
from the source feature space or spaces.
Here is the source.
|
What is difference between 'transfer learning' and 'domain adaptation'?
It seems Wikipedia has the most concise answer:
Domain adaptation is a subcategory of transfer learning. In domain
adaptation, the source and target domains all have the same feature
space (but diffe
|
7,499
|
Should training samples randomly drawn for mini-batch training neural nets be drawn without replacement?
|
A good theoretical analysis of with- and without-replacement schemes in the context of iterative algorithms based on random draws (which is how many discriminative Deep Neural Networks (DNNs) are trained) can be found here.
In short, it turns out that sampling without replacement leads to faster convergence than sampling with replacement.
I will give a short analysis here based on the toy example that they provide: Let's say that we want to optimize the following objective function:
$$
x_{\text{opt}} = \underset{x}{\arg\min} \frac{1}{2} \sum_{i=1}^{N}(x - y_i)^2
$$
where the target $y_i \sim \mathcal{N}(\mu, \sigma^2)$. In this example, we are trying to solve for the optimal $x$, given the $N$ observed labels $y_i$.
Ok, so if we were to solve for the optimal $x$ in the above directly, then we would take the derivative of the loss function here, set it to 0, and solve for $x$. So for our example above, the loss is
$$L = \frac{1}{2} \sum_{i=1}^{N}(x - y_i)^2$$
and its first derivative is:
$$ \frac{dL}{dx} = \sum_{i=1}^{N}(x - y_i)$$
Setting $\frac{dL}{dx}$ to 0 and solving for $x$ yields:
$$
x_{\text{opt}} = \frac{1}{N} \sum_{i=1}^{N} y_i
$$
In other words, the optimal solution is nothing but the sample mean of all the $N$ samples of $y$.
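As a quick numerical check of this closed-form result (a sketch with arbitrary made-up labels), the sample mean does minimize the squared loss:

```python
import numpy as np

y = np.array([1.0, 4.0, 2.5, 3.0, 0.5])   # arbitrary labels

def loss(x):
    # L = (1/2) * sum_i (x - y_i)^2, the objective from the text
    return 0.5 * np.sum((x - y) ** 2)

x_opt = y.mean()                           # closed-form minimizer

# Nudging x away from the mean in either direction increases the loss
assert loss(x_opt) < loss(x_opt + 0.1)
assert loss(x_opt) < loss(x_opt - 0.1)
print(x_opt, loss(x_opt))
```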
Now, if we couldn't perform the above computation all at once, we would have to do it recursively, via the gradient descent update equation below:
$$
x_i = x_{i-1} - \lambda_i \nabla(f(x_{i-1}))
$$
and simply inserting our terms here yields:
$$
x_{i} = x_{i-1} - \lambda_i (x_{i-1} - y_{i})
$$
If we run the above for all $i \in \{1, 2, \ldots, N\}$, then we are effectively performing this update without replacement. The question then becomes: can we also get the optimal value of $x$ in this way? (Remember that the optimal value of $x$ is nothing but the sample mean of $y$.) The answer is yes, if you let $\lambda_i = 1/i$. To see this, we expand:
$$
x_{i} = x_{i-1} - \lambda_i (x_{i-1} - y_{i}) \\
x_{i} = x_{i-1} - \frac{1}{i} (x_{i-1} - y_{i}) \\
x_{i} = \frac{i x_{i-1} - (x_{i-1} - y_{i})}{i} \\
x_{i} = \frac{(i - 1)x_{i-1} + y_{i}}{i} \\
i x_{i} = (i - 1)x_{i-1} + y_{i}
$$
The last equation however is nothing but the formula for the running average! Thus as we loop through the set from $i=1$, $i=2$, etc, all the way to $i=N$, we would have performed our updates without replacement, and our update formula gives us the optimal solution of $x$, which is the sample mean!
$$
N x_{N} = (N - 1)x_{N-1} + y_{N} \implies x_N = \frac{1}{N}\sum_{i=1}^{N} y_i = \mu
$$
In contrast, if we actually drew with replacement, then while our draws would be truly independent, the optimized value $x_N$ would be different from the (optimal) mean $\mu$, with expected square error:
$$
\mathop{E}\{(x_N - \mu)^2\}
$$
which is going to be a positive value; this simple toy example can be extended to higher dimensions. The consequence is that sampling without replacement is the better choice.
Hope this clarifies it some more!
|
Should training samples randomly drawn for mini-batch training neural nets be drawn without replacem
|
A good theoretical analysis of with and without replacement schemas in the context of iterative algorithms based on random draws (which are how many discriminative Deep Neural Networks (DNNs) are trai
|
Should training samples randomly drawn for mini-batch training neural nets be drawn without replacement?
A good theoretical analysis of with- and without-replacement schemes in the context of iterative algorithms based on random draws (which is how many discriminative Deep Neural Networks (DNNs) are trained) can be found here.
In short, it turns out that sampling without replacement leads to faster convergence than sampling with replacement.
I will give a short analysis here based on the toy example that they provide: Let's say that we want to optimize the following objective function:
$$
x_{\text{opt}} = \underset{x}{\arg\min} \frac{1}{2} \sum_{i=1}^{N}(x - y_i)^2
$$
where the target $y_i \sim \mathcal{N}(\mu, \sigma^2)$. In this example, we are trying to solve for the optimal $x$, given the $N$ observed labels $y_i$.
Ok, so if we were to solve for the optimal $x$ in the above directly, then we would take the derivative of the loss function here, set it to 0, and solve for $x$. So for our example above, the loss is
$$L = \frac{1}{2} \sum_{i=1}^{N}(x - y_i)^2$$
and its first derivative is:
$$ \frac{dL}{dx} = \sum_{i=1}^{N}(x - y_i)$$
Setting $\frac{dL}{dx}$ to 0 and solving for $x$ yields:
$$
x_{\text{opt}} = \frac{1}{N} \sum_{i=1}^{N} y_i
$$
In other words, the optimal solution is nothing but the sample mean of all the $N$ samples of $y$.
Now, if we couldn't perform the above computation all at once, we would have to do it recursively, via the gradient descent update equation below:
$$
x_i = x_{i-1} - \lambda_i \nabla(f(x_{i-1}))
$$
and simply inserting our terms here yields:
$$
x_{i} = x_{i-1} - \lambda_i (x_{i-1} - y_{i})
$$
If we run the above for all $i \in \{1, 2, \ldots, N\}$, then we are effectively performing this update without replacement. The question then becomes: can we also get the optimal value of $x$ in this way? (Remember that the optimal value of $x$ is nothing but the sample mean of $y$.) The answer is yes, if you let $\lambda_i = 1/i$. To see this, we expand:
$$
x_{i} = x_{i-1} - \lambda_i (x_{i-1} - y_{i}) \\
x_{i} = x_{i-1} - \frac{1}{i} (x_{i-1} - y_{i}) \\
x_{i} = \frac{i x_{i-1} - (x_{i-1} - y_{i})}{i} \\
x_{i} = \frac{(i - 1)x_{i-1} + y_{i}}{i} \\
i x_{i} = (i - 1)x_{i-1} + y_{i}
$$
The last equation however is nothing but the formula for the running average! Thus as we loop through the set from $i=1$, $i=2$, etc, all the way to $i=N$, we would have performed our updates without replacement, and our update formula gives us the optimal solution of $x$, which is the sample mean!
$$
N x_{N} = (N - 1)x_{N-1} + y_{N} \implies x_N = \frac{1}{N}\sum_{i=1}^{N} y_i = \mu
$$
In contrast, if we actually drew with replacement, then while our draws would be truly independent, the optimized value $x_N$ would be different from the (optimal) mean $\mu$, with expected square error:
$$
\mathop{E}\{(x_N - \mu)^2\}
$$
which is going to be a positive value; this simple toy example can be extended to higher dimensions. The consequence is that sampling without replacement is the better choice.
Hope this clarifies it some more!
|
Should training samples randomly drawn for mini-batch training neural nets be drawn without replacem
A good theoretical analysis of with and without replacement schemas in the context of iterative algorithms based on random draws (which are how many discriminative Deep Neural Networks (DNNs) are trai
|
7,500
|
Should training samples randomly drawn for mini-batch training neural nets be drawn without replacement?
|
According to the code in Nielsen's repository, mini-batches are drawn without replacement:
def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None):
n = len(training_data)
for j in range(epochs):
random.shuffle(training_data)
mini_batches = [
training_data[k:k+mini_batch_size]
for k in range(0, n, mini_batch_size)
]
for mini_batch in mini_batches:
self.update_mini_batch(mini_batch, eta)
We can see that there is no replacement of training samples within an epoch. Interestingly, we can also see that Nielsen chooses not to worry about adjusting eta (the learning rate) for the last mini_batch size, which may not have as many training samples as the previous mini-batches. Presumably this is an advanced modification he leaves for later chapters.**
** EDIT: Actually, this scaling occurs in the def update_mini_batch function. For example, with the weights:
self.weights = [w-(eta/len(mini_batch))*nw for w, nw in zip(self.weights, nabla_w)]
This is necessary because the last mini_batch may be smaller than the previous mini_batches if the number of training samples per mini_batch does not divide evenly into the total number of training samples available.
mylist = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10']
n = len(mylist)
mini_batch_size = 2
mini_batches = [
mylist[k:k+mini_batch_size]
for k in range(0, n, mini_batch_size)
]
for mini_batch in mini_batches:
print(mini_batch)
Output:
['1', '2']
['3', '4']
['5', '6']
['7', '8']
['9', '10']
If we change mini_batch_size to 3, which does not divide evenly into our 10 training samples, the output is:
['1', '2', '3']
['4', '5', '6']
['7', '8', '9']
['10']
When evaluating a slice over list indices (something of the form [x:y] where x and y are indices into the list), if the right-hand value exceeds the list length, Python simply returns the items up to the end of the list.
So the last mini-batch might be smaller than previous mini-batches, but if it is weighted by the same eta then those training samples will contribute more to the learning than samples in the other, larger mini-batches. Since this is just the last mini-batch it's probably not worth worrying about too much, but can easily be solved by scaling eta to the length of the mini-batch.
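One way to do that scaling is sketched below. This is a hypothetical tweak, not code from Nielsen's repository; the helper name sgd_epoch and the callback parameter are invented for illustration. Shrinking eta in proportion to the batch size, combined with the per-batch averaging already done in update_mini_batch, gives every sample an equal per-sample step of eta/mini_batch_size:

```python
import random

def sgd_epoch(training_data, mini_batch_size, eta, update_mini_batch):
    """One epoch of without-replacement mini-batch SGD, with eta scaled
    down for a short final batch so every sample gets equal weight."""
    n = len(training_data)
    random.shuffle(training_data)
    for k in range(0, n, mini_batch_size):
        mini_batch = training_data[k:k + mini_batch_size]
        # A short last batch gets a proportionally smaller step
        scaled_eta = eta * len(mini_batch) / mini_batch_size
        update_mini_batch(mini_batch, scaled_eta)
```

With 10 samples and mini_batch_size 3, the three full batches step with the full eta while the final single-sample batch steps with eta/3.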
|
Should training samples randomly drawn for mini-batch training neural nets be drawn without replacem
|
According to the code in Nielsen's repository, mini-batches are drawn without replacement:
def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None):
n = len(training_data)
|
Should training samples randomly drawn for mini-batch training neural nets be drawn without replacement?
According to the code in Nielsen's repository, mini-batches are drawn without replacement:
def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None):
n = len(training_data)
for j in range(epochs):
random.shuffle(training_data)
mini_batches = [
training_data[k:k+mini_batch_size]
for k in range(0, n, mini_batch_size)
]
for mini_batch in mini_batches:
self.update_mini_batch(mini_batch, eta)
We can see that there is no replacement of training samples within an epoch. Interestingly, we can also see that Nielsen chooses not to worry about adjusting eta (the learning rate) for the last mini_batch size, which may not have as many training samples as the previous mini-batches. Presumably this is an advanced modification he leaves for later chapters.**
** EDIT: Actually, this scaling occurs in the def update_mini_batch function. For example, with the weights:
self.weights = [w-(eta/len(mini_batch))*nw for w, nw in zip(self.weights, nabla_w)]
This is necessary because the last mini_batch may be smaller than the previous mini_batches if the number of training samples per mini_batch does not divide evenly into the total number of training samples available.
mylist = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10']
n = len(mylist)
mini_batch_size = 2
mini_batches = [
mylist[k:k+mini_batch_size]
for k in range(0, n, mini_batch_size)
]
for mini_batch in mini_batches:
print(mini_batch)
Output:
['1', '2']
['3', '4']
['5', '6']
['7', '8']
['9', '10']
If we change mini_batch_size to 3, which does not divide evenly into our 10 training samples, the output is:
['1', '2', '3']
['4', '5', '6']
['7', '8', '9']
['10']
When evaluating a slice over list indices (something of the form [x:y] where x and y are indices into the list), if the right-hand value exceeds the list length, Python simply returns the items up to the end of the list.
So the last mini-batch might be smaller than previous mini-batches, but if it is weighted by the same eta then those training samples will contribute more to the learning than samples in the other, larger mini-batches. Since this is just the last mini-batch it's probably not worth worrying about too much, but can easily be solved by scaling eta to the length of the mini-batch.
|
Should training samples randomly drawn for mini-batch training neural nets be drawn without replacem
According to the code in Nielsen's repository, mini-batches are drawn without replacement:
def SGD(self, training_data, epochs, mini_batch_size, eta, test_data=None):
n = len(training_data)
|