34,501 | In neural network literature, which one is activation?

A 'layer' does not have an activation. Each individual neuron has an activation.
The state of a neuron is its bias + all incoming connections (weight * activation from the source neuron). So that's $z_2$.
The activation is the state of a neuron passed through an activation function. So that's $a_2$, since $\sigma()$ is the activation function.
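A minimal numeric sketch of these two definitions (the weights, bias, and inputs below are made up for illustration):

```python
import numpy as np

# The *state* z of a neuron is its bias plus the weighted sum of incoming
# activations; the *activation* a is z passed through the activation
# function (here the logistic sigmoid).
incoming = np.array([0.5, -1.0, 2.0])   # activations of the source neurons
weights = np.array([0.8, 0.3, -0.5])    # connection weights
bias = 0.1

z = bias + weights @ incoming           # the state, z_2 in the question
a = 1.0 / (1.0 + np.exp(-z))            # the activation, a_2 in the question
print(z, a)
```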
Related: [1]
The answer below seems contradictory; however, it logically makes sense to call the activation of a neuron the value that comes out of the activation function. References to support my claim:
As you see, the value neuron y is getting from x1 is called the 'activation of neuron1', meaning its output - thus the value received after the activation function. [Source]
34,502 | In neural network literature, which one is activation?

I've seen people call both 'activation', but the input of the activation function ($z_2$) seems more formal.
For example, in Pattern Recognition and Machine Learning (whose $a_j$ and $z_j$ are not the same as in the question),
this batch normalization paper from Google (where the word "activation" is mentioned 42 times to denote the input of the activation function), and also this layer normalization paper by Hinton et al.
34,503 | In neural network literature, which one is activation?

Normally, the output of each neuron after performing the activation function is called the activation of that neuron. So, in your example, $a_2$ is the activation of the hidden neuron and $y$ is the activation of the output neuron.
34,504 | Why does maximum likelihood estimation have issues with overfitting?

Maximum likelihood does not tell us much, besides that our estimate is the best one we can give based on the data. It does not tell us anything about the quality of the estimate, nor about how well we can actually predict anything from the estimates.
Overfitting means we are estimating parameters that help us only very little for actual prediction. There is nothing in maximum likelihood that helps us estimate how well we predict. In fact, it is possible to increase the likelihood beyond any bound without increasing predictive accuracy at all.
To illustrate the last point about increasing likelihood without increasing predictive quality, let me give an example. Let's assume we want to predict the number of car crashes in the USA on a given day. As a predictor we only have the number of rocks analyzed by the Curiosity rover on Mars. Now, it seems highly unlikely that this predictor has any relation to the number of car crashes, but we can still generate a maximum likelihood model using it. Maximum likelihood only tells us it is the best we can do given the current dataset, even though this "best we can do" may still be total garbage. Since there is no relationship between the predictor and the number to be predicted, we cannot do anything except overfit.
Now let's take this a bit further, and assume we want to increase our maximum likelihood even more. We add the average distance to the planet Jupiter on that day as another predictor. Again this carries no predictive value, but our maximum likelihood for the model will increase. It cannot decrease: since we are still including the original predictor, the model that simply ignores the distance to Jupiter is one possible fit, and it has the exact same likelihood as the previous model. So we are increasing likelihood without adding predictive value, i.e., we are overfitting.
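This "the likelihood can only go up" argument can be sketched numerically (simulated noise data; the random columns below are hypothetical stand-ins for the rover and Jupiter figures):

```python
import numpy as np

# The response is pure noise, yet adding another random ("Jupiter") column
# never decreases the maximized Gaussian log-likelihood of a least-squares fit.
rng = np.random.default_rng(0)
n = 50
y = rng.normal(size=n)                                   # "car crashes"
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])   # junk predictor
X2 = np.column_stack([X1, rng.normal(size=n)])           # one more junk predictor

def max_loglik(X, y):
    """Maximized Gaussian log-likelihood of an OLS fit (sigma^2 = RSS/n)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return -0.5 * len(y) * (np.log(2 * np.pi * rss / len(y)) + 1)

print(max_loglik(X1, y), max_loglik(X2, y))  # the second value is never smaller
```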
Let's further assume somebody provides a model that estimates the number of car crashes based on some reasonable predictors (number of cars driven on that day, whether the day is a holiday / weekend / weekday, etc.), and that model gives us a likelihood of $L$. Now we can push our "astrological" model past it by just adding arbitrary figures derived from constellations of stars and planets. If we add enough constellations, we can get our "astrological" model to have a maximum likelihood $L' > L$. Does that mean we should discard the well-reasoned model and use the astrological one instead? Of course not.
This should show that overfitting is always present, unless we introduce some method to guard against overfitting.
34,505 | Why does maximum likelihood estimation have issues with overfitting?

Some models are just too flexible: in these cases, maximum likelihood estimators can effectively "memorize" the data---signal and noise. Such considerations motivate reducing the flexibility of some models through, for instance, some kind of regularization.
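As a hedged sketch of that idea (simulated data; the polynomial degree and penalty strength are arbitrary choices): an unpenalized least-squares polynomial fit, which is the Gaussian maximum likelihood fit, always wins on training error, while a ridge-regularized fit of the same basis trades a little training error for reduced flexibility and typically generalizes better.

```python
import numpy as np

# Flexible degree-9 polynomial fit by plain least squares (the Gaussian ML
# fit) versus the same basis with a ridge penalty on the coefficients.
rng = np.random.default_rng(2)
x_train = rng.uniform(-1, 1, 30)
x_test = rng.uniform(-1, 1, 200)
truth = lambda t: np.sin(3 * t)
y_train = truth(x_train) + rng.normal(scale=0.3, size=x_train.size)
y_test = truth(x_test) + rng.normal(scale=0.3, size=x_test.size)

def design(t, degree=9):
    return np.vander(t, degree + 1)

def fit_and_score(lam):
    A = design(x_train)
    w = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y_train)
    mse = lambda t, y: np.mean((design(t) @ w - y) ** 2)
    return mse(x_train, y_train), mse(x_test, y_test)

train_ml, test_ml = fit_and_score(0.0)    # pure maximum likelihood
train_r, test_r = fit_and_score(1e-2)     # ridge: less flexible
print(train_ml, test_ml, train_r, test_r) # ML always wins on *training* error
```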
34,506 | Why does maximum likelihood estimation have issues with overfitting?

In my opinion, the reason maximum likelihood causes overfitting is that the parameter values are estimated in the light of the current data. You should derive the parameter values in the light of future data instead of the current data. If you are interested in this idea, please refer to the paper below.
Takezawa, K. (2012): "A Revision of AIC for Normal Error Models," Open Journal of Statistics, Vol. 2 No. 3, 2012, pp. 309-312. doi: 10.4236/ojs.2012.23038. http://www.scirp.org/journal/PaperInformation.aspx?paperID=20651
34,507 | In caterpillars, is diet more important than size in resistance to predators?

tl;dr @whuber is right that diet and weight are confounded in your analysis: this is what the picture looks like.
The fat points + ranges show the mean and bootstrap confidence intervals for diet alone; the gray line + confidence interval shows the overall relationship with weight; the individual lines + CI show the relationships with weight for each group. There's more rejection for diet=N, but those individuals also have higher weights.
Going into the gory mechanical details: you're on the right track with your analysis, but (1) when you test the effect of diet, you have to take the effect of weight into account, and vice versa; by default R does a sequential ANOVA, which tests the effect of diet alone; (2) for data like this you should probably be using a Poisson generalized linear model (GLM), although it doesn't make too much difference to the statistical conclusions in this case.
If you look at summary(), which tests marginal effects, rather than anova(), you'll see that nothing looks particularly significant. (You also have to be careful with testing main effects in the presence of an interaction: in this case the effect of diet is evaluated at a weight of zero, which is probably not sensible; but since the interaction is non-significant (although it has a large effect!) it may not make much difference.)
summary(fit.lm <- lm(rejections~diet*weight,data=dd2))
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.3455 0.9119 0.379 0.710
## dietN 1.9557 1.4000 1.397 0.182
## weight 3.9573 21.6920 0.182 0.858
## dietN:weight -5.7465 22.5013 -0.255 0.802
Centering the weight variable:
dd2$cweight <- dd2$weight-mean(dd2$weight)
summary(fit.clm <- update(fit.lm,rejections~diet*cweight))
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 0.7559 1.4429 0.524 0.608
## dietN 1.3598 1.5318 0.888 0.388
## cweight 3.9573 21.6920 0.182 0.858
## dietN:cweight -5.7465 22.5013 -0.255 0.802
No huge changes in the story here.
car::Anova(fit.clm,type="3")
## Response: rejections
## Sum Sq Df F value Pr(>F)
## (Intercept) 0.3149 1 0.2744 0.6076
## diet 0.9043 1 0.7881 0.3878
## cweight 0.0382 1 0.0333 0.8575
## diet:cweight 0.0748 1 0.0652 0.8017
## Residuals 18.3591 16
There is some argument about whether so-called "type 3" tests make sense; they don't always, although centering the weight helps. Type 2 analysis, which tests the main effects after taking the interaction out of the model, may be more defensible. In this case diet and cweight are tested in the presence of each other, but without the interactions included.
car::Anova(fit.clm,type="2")
## Response: rejections
## Sum Sq Df F value Pr(>F)
## diet 4.1179 1 3.5888 0.07639 .
## cweight 0.0661 1 0.0576 0.81343
## diet:cweight 0.0748 1 0.0652 0.80168
## Residuals 18.3591 16
We can see that if we analyzed diet ignoring the effects of weight we would get a highly significant result - this is essentially what you found in your analysis, because of the sequential ANOVA.
fit.lm_diet <- update(fit.clm,. ~ diet)
car::Anova(fit.lm_diet)
## Response: rejections
## Sum Sq Df F value Pr(>F)
## diet 11.25 1 10.946 0.003908 **
## Residuals 18.50 18
It would be more standard to fit this kind of data to a Poisson GLM (glm(rejections~diet*cweight,data=dd2,family=poisson)), but in this case it doesn't make very much difference to the conclusions.
By the way, it's better to rearrange your data programmatically rather than by hand if you can. For reference, this is how I did it (sorry for the amount of "magic" I used):
library(tidyr)
library(dplyr)
dd <- read.table(header=TRUE,text=
"Trial A_Weight N_Weight A_Rejections N_Rejections
1 0.0496 0.1857 0 1
2 0.0324 0.1112 0 2
3 0.0291 0.3011 0 2
4 0.0247 0.2066 0 3
5 0.0394 0.1448 3 1
6 0.0641 0.0838 1 3
7 0.0360 0.1963 0 2
8 0.0243 0.145 0 3
9 0.0682 0.1519 0 3
10 0.0225 0.1571 1 0
")
## pick out weight and rearrange to long format
dwt <- dd %>% select(Trial,A_Weight,N_Weight) %>%
gather(diet,weight,-Trial) %>%
mutate(diet=gsub("_.*","",diet))
## ditto, rejections
drej <- dd %>% select(Trial,A_Rejections,N_Rejections) %>%
gather(diet,rejections,-Trial) %>%
mutate(diet=gsub("_.*","",diet))
## put them back together
dd2 <- full_join(dwt,drej,by=c("Trial","diet"))
Plotting code:
dd_sum <- dd2 %>% group_by(diet) %>%
do(data.frame(weight=mean(.$weight),
rbind(mean_cl_boot(.$rejections))))
library(ggplot2); theme_set(theme_bw())
ggplot(dd2,aes(weight,rejections,colour=diet))+
geom_point()+
scale_colour_brewer(palette="Set1")+
scale_fill_brewer(palette="Set1")+
geom_pointrange(data=dd_sum,aes(y=y,ymin=ymin,ymax=ymax),
size=4,alpha=0.5,show.legend=FALSE)+
geom_smooth(method="lm",aes(fill=diet),alpha=0.1)+
geom_smooth(method="lm",aes(group=1),colour="darkgray",
alpha=0.1)+
scale_y_continuous(limits=c(0,3),oob=scales::squish)
34,508 | When does logistic regression not work properly?

Consider these data (copied from @Sycorax's answer here: Can Random Forest be used for Feature Selection in Multiple Linear Regression?):
There are two aspects to the data in this figure. First, the relationship is non-linear. That isn't actually a problem for a logistic regression properly specified. In some cases, a logistic regression might fare better than a standard decision tree (cf. my answer here: How to use boxplots to find the point where values are more likely to come from different conditions?, although the comparison vis-à-vis a random forest is more ambiguous). The bigger problem is that there is complete separation at the decision boundary. There are ways of trying to deal with that (see @Scortchi's answer here: How to deal with perfect separation in logistic regression?), but it adds complexity and requires considerable sophistication to address well. I think a random forest would handle this as a matter of course.
34,509 | When does logistic regression not work properly?

The answers so far emphasize the predictive failure of logistic regression, but there are also issues of poor feature importance/inference. For example, when your features are highly correlated or the relationship is highly nonlinear, the coefficients of your logistic regression will not correctly capture the gain/loss from each individual feature. In gung's example, if you were to train a logistic regression on the picture of points shown, it would likely create a linear split somewhere in the middle of the red region (for example a vertical line), implying that an increase in, say, $x_1$ leads to a higher probability of being in the red class - which is true for $x_1$ starting to the left of the prediction boundary and false for $x_1$ starting to the right.
34,510 | When does logistic regression not work properly?

@gung had a good answer. Logistic regression is a linear model, so it may not work well in non-linear cases. But as I mentioned in the comment, there may be ways to transform the data into another space where logistic regression works well again - although finding the basis expansion / feature transformation may be non-trivial.
Essentially, a given model will work well when the data satisfy its assumptions; e.g., if the decision boundary is linear, then logistic regression will work well.
On the other hand, I would strongly recommend you review the bias-variance trade-off.
In terms of model complexity, logistic regression has high bias and low variance, and random forest is the opposite. This means that, in general, logistic regression will perform less accurately but more stably, and random forest the opposite.
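The linear-boundary limitation, and the transformation remedy mentioned above, can be sketched with simulated data and a bare-bones gradient-descent logistic fit (the data, feature choice, and optimizer settings here are all illustrative):

```python
import numpy as np

# Labels follow a circular decision boundary: a linear logistic model on the
# raw coordinates does little better than guessing the majority class, but
# the same model on the hand-crafted radial feature r^2 does well.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1).astype(float)

def logreg_accuracy(F, y, steps=3000, lr=0.1):
    """Training accuracy of a plain gradient-descent logistic fit on F."""
    F = np.column_stack([np.ones(len(F)), F])   # intercept column
    w = np.zeros(F.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-F @ w))
        w += lr * F.T @ (y - p) / len(y)
    return np.mean(((F @ w) > 0) == (y == 1))

acc_raw = logreg_accuracy(X, y)                                   # near majority-class rate
acc_radial = logreg_accuracy((X ** 2).sum(axis=1, keepdims=True), y)
print(acc_raw, acc_radial)
```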
34,511 | When does logistic regression not work properly?
A simple example for a case when logistic regression can’t work properly:
https://towardsdatascience.com/when-logistic-regression-simply-doesnt-work-8cd8f2f9d997
34,512 | When does logistic regression not work properly? | Situations where the features' combined effect on the response is not linear.
Imagine you try classifying documents as pet related or not pet related.
You have two features -
* Number of words in the document (X1).
* Number of times the word Dog appears in the document (X2).
Intuitively, X2/X1 is a good way of determining a document's class.
This is not a linear relation, so whereas tree-like models can use a rule like:
If X1 > 10:
    if X2 > 5:
        Pet related
    else:
        Not Pet related
A logistic regression will have no such option, and will result in a model described by:
if aX1 + bX2 > Z:
    Pet related
else:
    Not Pet related
For some Z.
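To make this concrete, here is a small numpy sketch (the XOR-style toy data and the brute-force search over random linear rules are illustrative assumptions, not from the original answer): nested thresholds of the kind a tree produces classify the data perfectly, while no single linear rule aX1 + bX2 > Z gets much past chance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
X = rng.uniform(-1, 1, size=(n, 2))
y = (X[:, 0] > 0) != (X[:, 1] > 0)    # XOR-style target: not linear in X1, X2

# Nested thresholds, the kind of rule a tree can express:
rule_pred = np.where(X[:, 0] > 0, X[:, 1] <= 0, X[:, 1] > 0)
rule_acc = np.mean(rule_pred == y)    # perfect on this data

# Best single linear rule a*X1 + b*X2 > Z found over many random candidates:
lin_best = 0.0
for _ in range(2000):
    a, b, z = rng.normal(size=3)
    pred = a * X[:, 0] + b * X[:, 1] > z
    lin_best = max(lin_best, np.mean(pred == y), np.mean(pred != y))
```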
34,513 | How to test for independence of residuals in linear model? | [I thought I'd be able to find a close duplicate with a similar answer but a couple of searches didn't turn up something suitable for some reason. I'll post an answer for now but I may still locate a duplicate.]
Note that residuals are not actually independent. It's the error term that's assumed to be independent. The residuals estimate the error term but they're definitely dependent.
There are many, many ways for errors to fail to be independent, so it's quite hard to do a general test for dependence (there are a few very general dependence tests but they require very large sample sizes to pick much up; failing to find dependence with a test that has its power scattered to the four winds is not much consolation). For more typical problems, you really need to specify what kind of dependence you might be looking for. For example, if you have observations over time, you might anticipate autocorrelation, which is easy to look for via an acf plot. If you suspect some form of intra-class correlation where there's a "class" variable not in the model (or indeed any dependence due to a variable not being in the model) but you have the variable or a reasonable proxy, it's easy enough to see whether residuals relate to that variable. So if you can elucidate a likely source of dependence in your problem, that will tell you a great deal about what kinds of things to look for.
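For the autocorrelation case, a quick numpy sketch (simulated data with AR(1) errors; the coefficients are made up for illustration) of checking the lag-1 autocorrelation of residuals after a linear fit, the numerical analogue of eyeballing the first bar of an acf plot:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
t = np.arange(n, dtype=float)

# Errors with strong serial dependence: e[i] = 0.8 * e[i-1] + noise
noise = rng.normal(size=n)
e = np.zeros(n)
for i in range(1, n):
    e[i] = 0.8 * e[i - 1] + noise[i]

y = 1.5 * t + e                         # linear trend plus autocorrelated errors
slope, intercept = np.polyfit(t, y, 1)  # fit the linear model
resid = y - (intercept + slope * t)

# Lag-1 autocorrelation of the residuals; near 0 for independent errors
r1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
```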
34,514 | How to test for independence of residuals in linear model? | When you want to check for dependence of residuals, you need something they can depend on. There are basically 2 classes of dependencies
1. Residuals correlate with another variable
2. Residuals correlate with other (close) residuals (autocorrelation)
For 1), it is common to plot
* Res against predicted value
* Res against predictors
You can formalize any dependency you spot with a correlation test or a regression if you want, but usually problems are visually identified.
For 2), one can use autocorrelation plots or spatial / temporal variograms. A formal analysis can be done with the usual time-series / spatial analysis methods, e.g. Durbin-Watson or a CAR model for temporal, or Moran's I for spatial. Note the caveats of all these methods, e.g. that they usually assume homogeneity of the correlation structure, which is commonly violated.
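As an illustration of point 2), a numpy sketch of the Durbin-Watson statistic computed by hand (simulated residuals; the AR coefficient is an arbitrary illustrative choice). DW is roughly $2(1 - r_1)$, so it sits near 2 for independent residuals and well below 2 under positive autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

def durbin_watson(resid):
    # DW = sum of squared successive differences / sum of squares
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

indep = rng.normal(size=n)          # independent "residuals"
ar = np.zeros(n)
for i in range(1, n):
    ar[i] = 0.8 * ar[i - 1] + rng.normal()  # positively autocorrelated

dw_indep = durbin_watson(indep)     # close to 2
dw_ar = durbin_watson(ar)           # well below 2
```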
34,515 | How to test for independence of residuals in linear model? | When the error terms in the model are independent, the dependence of the residuals is mild. So testing that the residuals are independent or uncorrelated under the normal assumption is what is commonly done.
34,516 | (Non-)linear regression at leafs decision tree | There has been quite some research on this topic over the last decades, starting with the pioneering efforts of Ciampi, followed by Loh's GUIDE, and then also Gama's functional trees or the model-based recursive partitioning approach by us. A nice overview is given in @Momo's answer to this question: Advantage of GLMs in terminal nodes of a regression tree?
Corresponding software is less widely used than simple constant-fit trees as you observe. Part of the reason for this is presumably that it is more difficult to write - but also more difficult to use. It just requires more specifications than a simple CART model. But software is available (as previously pointed out here by @marqram or @Momo at: Regression tree algorithm with linear regression models in each leaf). Prominent software packages include:
In the Weka suite there are M5P (M5') for continuous responses, LMT (logistic model trees) for binary responses, and FT (functional trees) for categorical responses. See http://www.cs.waikato.ac.nz/~ml/weka/ for more details. The former two functions are also easily interfaced through the R package RWeka.
Loh's GUIDE implementation is available in binary form at no cost (but without source code) from http://www.stat.wisc.edu/~loh/guide.html. It allows to modify the details of the method by a wide range of control options.
Our MOB (MOdel-Based recursive partitioning) algorithm is available in the R package partykit (successor to the party implementation). The mob() function gives you a general framework, allowing you to specify new models that can be easily fitted in the nodes/leaves of the tree. Convenience interfaces lmtree() and glmtree() that combine mob() with lm() and glm() are directly available and illustrated in vignette("mob", package = "partykit"). But other plugins can also be defined. For example, in https://stackoverflow.com/questions/37037445/using-mob-trees-partykit-package-with-nls-model mob() is combined with nls(). But there are also "mobsters" for various psychometric models (in psychotree) and for beta regression (in betareg).
34,517 | (Non-)linear regression at leafs decision tree | MARS does this.
I think the reason it isn't more popular is that a lot of the robustness of ensembles of decision tree style models comes from the fact they always predict constant values in the range they've seen.
Outliers in the data generally just get lumped together with the highest/lowest normal values in the data on the last leaf and don't cause strange predictions or throw off coefficients.
They also don't suffer from issues with multicollinearity as much as linear models.
You might be able to address these issues in an implementation but it's probably easier and more robust to just add more trees in an ensemble via boosting or bagging until you get the smoothness you need.
34,518 | (Non-)linear regression at leafs decision tree | I found a method that does just this (a decision tree, where the leaves contain a linear regression instead of an average value). They are called model trees [1] and an example is the M5P [2] algorithm of Weka. In M5P a linear regression is at each leaf.
Edit: I found another package/model that does something similar and seems to give very good results for my datasets: cubist. An implementation in R is given by the cubist package [3]. Cubist adds boosting ensembling to M5P and what it calls 'instance based corrections'.
[1]: Torgo, L. Functional models for regression tree leaves. In Proceedings of the 14th International Conference on Machine Learning, pp. 385–393. Morgan Kaufmann, 1997.
[2]: M5P http://weka.sourceforge.net/doc.dev/weka/classifiers/trees/M5P.html
[3]: Cubist model Cubist: Rule- And Instance-Based Regression Modeling https://cran.r-project.org/web/packages/Cubist/index.html
34,519 | Is there a term for the standard deviation of a sample as a percentage of the mean? | Standard deviation divided by mean is called coefficient of variation. It is defined exactly as you did
$$ c_{\rm v} = \frac{\sigma}{\mu} $$
in terms of population mean and standard deviation, or it can be estimated using sample mean and sample standard deviation.
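A quick numpy check of the definition (the data values are arbitrary):

```python
import numpy as np

x = np.array([2, 4, 4, 4, 5, 5, 7, 9], dtype=float)

cv = np.std(x) / np.mean(x)                  # population form: sigma / mu
cv_sample = np.std(x, ddof=1) / np.mean(x)   # estimate using the sample standard deviation
```

Here the mean is 5 and the population standard deviation is 2, so the coefficient of variation is 0.4 (often reported as 40%).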
34,520 | How do I get cost function of logistic regression in Scikit Learn from log likelihood function? | Your log-likelihood is:
$$
\log L(x, y; w) = \sum_{i=1}^N \ell_i
$$
where
\begin{align}
\ell_i
&= y_i \log\left( \frac{1}{1 + \exp(- w^T x_i)} \right)
+ (1-y_i) \log\left( 1 - \frac{1}{1 + \exp(- w^T x_i)} \right)
\\&= y_i \log\left( \frac{1}{1 + \exp(- w^T x_i)} \right)
+ (1-y_i) \log\left( \frac{1 + \exp(- w^T x_i)}{1 + \exp(- w^T x_i)} - \frac{1}{1 + \exp(- w^T x_i)} \right)
\\&= y_i \log\left( \frac{1}{1 + \exp(- w^T x_i)} \right)
+ (1-y_i) \log\left( \frac{\exp(- w^T x_i)}{1 + \exp(- w^T x_i)} \right)
\\&= y_i \log\left( \frac{1}{1 + \exp(- w^T x_i)} \right)
+ (1-y_i) \log\left( \frac{\exp(- w^T x_i)}{1 + \exp(- w^T x_i)} \times \frac{\exp(w^T x_i)}{\exp(w^T x_i)} \right)
\\&= y_i \log\left( \frac{1}{1 + \exp(- w^T x_i)} \right)
+ (1-y_i) \log\left( \frac{1}{\exp(w^T x_i) + 1} \right)
\\&= \log\left( \frac{1}{1 + \exp\left( \begin{cases}- w^T x_i & y_i = 1 \\ w^T x_i & y_i = 0\end{cases} \right)} \right)
\\&= \log\left( \frac{1}{1 + \exp\left( - y'_i w^T x_i \right)} \right)
\\&= -\log\left( 1 + \exp\left( - y_i' w^T x_i \right) \right)
\end{align}
where $y_i \in \{0, 1\}$ but we defined $y_i' = 2 y_i - 1 \in \{-1, 1\}$.
To get to the loss function in the image, first we need to add an intercept to the model, replacing $w^T x_i$ with $w^T x_i + c$.
Then:
$$
\arg\max \log L(X, y; w, c)
= \arg\min - \log L(X, y; w, c)
,$$
and then we add a regularizer $P(w, c)$:
$$
\arg\min \lambda P(w, c) - \log L(X, y; w, c)
= \arg\min P(w, c) - \frac{1}{\lambda} \log L(X, y; w, c)
,$$
where we then set $C := \frac1\lambda$.
The $L_2$ penalty is
$$
P(w, c) = \frac12 w^T w = \frac12 \sum_{j=1}^d w_j^2
;$$
that $\tfrac12$ is just done for mathematical convenience when we differentiate, it doesn't really affect anything. The $L_1$ penalty has
$$
P(w, c) = \lVert w \rVert_1 = \sum_{j=1}^d \lvert w_j \rvert
.$$
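A small numpy check of the key step above, that the $\{0,1\}$-label cross-entropy equals the compact $\{-1,+1\}$ form (random made-up values for the linear predictor):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=1000)           # linear predictor w^T x_i + c
y = rng.integers(0, 2, size=1000)   # labels in {0, 1}
yp = 2 * y - 1                      # relabelled to {-1, +1}

sig = 1.0 / (1.0 + np.exp(-z))

# negative log-likelihood, {0,1} form
nll_01 = -(y * np.log(sig) + (1 - y) * np.log(1 - sig))
# compact form from the derivation: log(1 + exp(-y' z))
nll_pm = np.log1p(np.exp(-yp * z))
```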
34,521 | What does "performing PCA on a single time series" mean/do? | It isn't meaningful to run PCA on a univariate time series (or, more generally, a single vector). To run PCA on time series data, you'd need to have either a multivariate time series, or multiple univariate time series. There are ways to transform a univariate time series into a multivariate one (e.g. wavelet or time-frequency transforms, time delay embeddings, etc.). For example, the spectrogram of a univariate time series gives you the power at each frequency, for each moment in time.
Say we have a multivariate time series with $p$ dimensions/variables. Or, we might have a set of $p$ univariate time series, where each time point has some common meaning across time series (e.g. time relative to some event). In both cases, there are $n$ time points. There are a couple ways to run PCA:
1. Consider each time point to be an observation. Dimensions correspond to variables of the multivariate time series, or to the different univariate time series. So, there are $n$ points in a $p$ dimensional space. In this case, eigenvectors correspond to instantaneous patterns across the dimensions/time series. At each moment in time, we represent the amplitude across dimensions/time series as a linear combination of these patterns.
2. Consider each variable of the multivariate time series (or each univariate time series) to be an observation. Dimensions correspond to time points. So, there are $p$ points in an $n$-dimensional space. In this case, the eigenvectors correspond to temporal basis functions, and we're representing each time series as a linear combination of these basis functions.
Given the above, it's apparent why PCA doesn't make sense for a single univariate time series. Either you have $n$ observations and 1 dimension (in which case there's nothing for PCA to do), or you have a single observation with $n$ dimensions (in which case the problem is completely underdetermined and all solutions are equivalent).
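A numpy sketch of the first setup, each time point as an observation (the toy series, built as scaled copies of one latent pattern plus noise, are an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 4
t = np.linspace(0, 8 * np.pi, n)
latent = np.sin(t)                       # one shared temporal pattern

amps = np.array([1.0, 0.5, -2.0, 1.5])   # each series scales the pattern
X = np.outer(latent, amps) + 0.05 * rng.normal(size=(n, p))  # n x p data matrix

# PCA via SVD of the column-centered matrix: n observations in p dimensions
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)          # variance explained per component
```

The first eigenvector (the first row of Vt) is the instantaneous pattern across the four series, and it captures nearly all the variance because the series share one latent component.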
34,522 | What does "performing PCA on a single time series" mean/do? | PCA on a single time series can be done, of course. The result will be one principal component, which will be equal to the original series. Hence, technically it'll work, but it'll be pointless: you'll get your input series in the output.
Here's a MATLAB example. I got the PCA of a random series, then plotted the only principal component against the original series to show that it's the same thing. I also show that the difference between the two series (adjusted for the mean) is zero.
x=randn(10,1);
[~,score,~,~,~,mu]=pca(x);
scatter(x,score);
max(abs(x-score-mu))
ans =
4.4409e-16
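The same demonstration in numpy (a sketch mirroring the MATLAB code above, via SVD of the centered column vector):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=10)

xc = (x - x.mean()).reshape(-1, 1)    # centered n x 1 "data matrix"
U, s, Vt = np.linalg.svd(xc, full_matrices=False)

score = U[:, 0] * s[0]                # the single principal component
coeff = Vt[0, 0]                      # its loading: +1 or -1

recon = score * coeff + x.mean()      # reconstruct: exactly the input series
err = np.max(np.abs(recon - x))
```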
34,523 | What does "performing PCA on a single time series" mean/do? | Maybe, “performing PCA on a single time series” means application of singular spectrum analysis (SSA), which is sometimes called PCA of time series.
In SSA, multivariate data are constructed from lagged (moving) subseries of the initial time series. Then PCA (usually, SVD, which is PCA without centering/standardizing) is applied to the obtained multivariate data.
Example in R:
library(Rssa)
s <- ssa(co2)
# Plot eigenvectors
plot(s, type = "vectors")
# Reconstruct the series, grouping elementary series.
r <- reconstruct(s, groups = list(Trend = c(1, 4), Season1 = c(2,3), Season2 = c(5, 6)))
plot(r)
What does "performing PCA on a single time series" mean/do?

Let us assume that whoever was asking you to perform PCA on a univariate time series was really asking you "How many linearly independent subsegments does this single time series have?". We can perform PCA on this data after a simple transformation of your univariate time series. Assume your time series has length $T$ and that $T=MN$, where $M$ is the length of each temporal subsegment of the original time series. You can perform PCA on this $M$ x $N$ matrix. For a very contrived example, say you have this time series with time on x-axis and total time length of 750:
Let us now rearrange this vector of length $T$ to a 250 x 3 matrix. Notice in this very contrived case that the waveform repeats itself every 250 timesteps. We then run PCA on this 250 x 3 matrix. The plot of the variance explained vs the PCA components ranked by variance explained will be:
We can plot the representation of our original data in the PC space, where we see the M-length vector corresponding to PC1 (blue) matches with the actual observed waveform that repeats, and that the plot of the M-length vector corresponding to PC2 (gold) is essentially zero as all of the variance in the original time series is explained by PC1.
This is greatly simplified versus how you would do this in application. The best way to do it would be to have $M$ act as a sliding filter along the time series, where the length of $M$ and the stride are hyperparameters. You could use PCA on the eventual $MN$ matrix to reconstruct the original signal, treating the PCA scores as a basis set and comparing error between your reconstruction and the original signal. In this basic example, the stride is equal to the length of $M$.
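The reshape-then-PCA recipe above is easy to reproduce as a sketch, using a synthetic repeating waveform with $T = 750$, $M = 250$, $N = 3$ (all names here are illustrative, not from the original answer):

```python
import numpy as np

wave = np.sin(np.linspace(0, 4 * np.pi, 250))   # one 250-step waveform
series = np.tile(wave, 3)                       # T = 750 = M*N, with M=250, N=3
X = series.reshape(3, 250).T                    # 250 x 3: columns are subsegments
Xc = X - X.mean(axis=0)                         # center each column before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)
# All three subsegments are the same waveform, so PC1 explains ~all variance:
print(explained[0])
```

A single dominant component confirms the series contains one linearly independent subsegment, exactly the situation plotted in the contrived example.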
Generate synthetic data to match sample data

I am trying to answer my own question after doing a few initial experiments. I tried the SMOTE technique to generate new synthetic samples, and the results are encouraging: it generates synthetic data with characteristics very similar to those of the sample data. The code is from http://comments.gmane.org/gmane.comp.python.scikit-learn/5278 by Karsten Jeschkies and is given below.
import numpy as np
from random import choice
from sklearn.neighbors import NearestNeighbors

def SMOTE(T, N, k):
    """
    Returns (N/100) * n_minority_samples synthetic minority samples.

    Parameters
    ----------
    T : array-like, shape = [n_minority_samples, n_features]
        Holds the minority samples
    N : percentage of new synthetic samples:
        n_synthetic_samples = N/100 * n_minority_samples. Can be < 100.
    k : int. Number of nearest neighbours.

    Returns
    -------
    S : array, shape = [(N/100) * n_minority_samples, n_features]
    """
    n_minority_samples, n_features = T.shape

    if N < 100:
        # create synthetic samples only for a subset of T.
        # TODO: select random minority samples
        N = 100

    if (N % 100) != 0:
        raise ValueError("N must be < 100 or multiple of 100")

    N = N // 100  # number of synthetic samples per original sample
    n_synthetic_samples = N * n_minority_samples
    S = np.zeros(shape=(n_synthetic_samples, n_features))

    # Learn nearest neighbours
    neigh = NearestNeighbors(n_neighbors=k)
    neigh.fit(T)

    # Calculate synthetic samples
    for i in range(n_minority_samples):
        nn = neigh.kneighbors(T[i].reshape(1, -1), return_distance=False)
        for n in range(N):
            nn_index = choice(nn[0])
            # NOTE: nn includes T[i], we don't want to select it
            while nn_index == i:
                nn_index = choice(nn[0])

            dif = T[nn_index] - T[i]
            gap = np.random.random()
            S[n + i * N, :] = T[i, :] + gap * dif[:]

    return S
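The heart of the routine is the interpolation step `T[i] + gap * (T[nn_index] - T[i])`. Stand-alone, with hypothetical points, that step looks like this:

```python
import random

def smote_point(x, neighbor, rng=random):
    """A synthetic sample: a uniformly random point on the segment x -> neighbor."""
    gap = rng.random()
    return [xi + gap * (ni - xi) for xi, ni in zip(x, neighbor)]

random.seed(0)
s = smote_point([1.0, 2.0], [3.0, 6.0])
# s lies between the two parents in every coordinate
print(s)
```

Because every synthetic point lies on a segment between two real neighbours, the marginal statistics of the enlarged sample stay close to the original, which is what the summary tables below show.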
I got the following results with a small dataset of 4999 samples having 2 features.
Description of the small sample data:
After Before
count 4999.000000 4999.000000
mean 350.577866 391.757958
std 566.065273 693.179718
min 0.000000 0.000000
25% 52.975000 93.991500
50% 183.388000 226.027000
75% 414.599000 453.261167
max 10980.004000 27028.158333
Histogram is as follows

Scatter plot to see the joint distribution is as follows:
After using the SMOTE technique to generate twice the number of samples, I get the following
After Before
count 9998.000000 9998.000000
mean 350.042946 389.020419
std 556.334086 652.886148
min 0.000000 0.000000
25% 53.074959 94.885295
50% 184.067407 226.802912
75% 414.955448 454.008691
max 10685.308012 26688.626042
Histogram is as follows
Scatter plot to see the joint distribution is as follows:
Generate synthetic data to match sample data

I found this R package named synthpop that was developed for public release of confidential data for modeling. Supersampling with it seems reasonable.
synthpop: Bespoke Creation of Synthetic Data in R
Generate synthetic data to match sample data

You could also look at MUNGE. It generates synthetic datasets from a nonparametric estimate of the joint distribution. The idea is similar to SMOTE (perturb original data points using information about their nearest neighbors), but the implementation is different, as well as its original purpose. Whereas SMOTE was proposed for balancing imbalanced classes, MUNGE was proposed as part of a 'model compression' strategy. The goal is to replace a large, accurate model with a smaller, efficient model that's trained to mimic its behavior. There are many details you can ignore if you're just interested in the sampling procedure. The paper compares MUNGE to some simpler schemes for generating synthetic data.
Basic idea:
Generate a synthetic point as a copy of original data point $e$
Let $e'$ be its nearest neighbor
For each attribute $a$:
If $a$ is discrete: With probability $p$, replace the synthetic point's attribute $a$ with $e'_a$.
If $a$ is continuous: With probability $p$, replace the synthetic point's attribute $a$ with a value drawn from a normal distribution with mean $e'_a$ and standard deviation $\left | e_a - e'_a \right | / s$
$p$ and $s$ are parameters
The paper:
Bucila et al. (2006). Model compression.
Regarding the stats/plots you showed, it would be good to check some measure of the joint distribution too, since it's possible to destroy the joint distribution while preserving the marginals.
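For continuous attributes, the probabilistic swap described above can be sketched in pure Python (a hypothetical helper; parameter names follow the description, not the paper's code):

```python
import random

def munge_perturb(e, e_prime, p, s, rng=random):
    """One MUNGE pass over continuous attributes: with probability p, replace
    attribute a by a draw from Normal(e'_a, |e_a - e'_a| / s)."""
    out = list(e)
    for a, (ea, na) in enumerate(zip(e, e_prime)):
        if rng.random() < p:
            out[a] = rng.gauss(na, abs(ea - na) / s)
    return out

random.seed(1)
z = munge_perturb([0.0, 1.0], [1.0, 3.0], p=1.0, s=2.0)
```

Larger $s$ shrinks the perturbation noise toward the neighbour's value, while $p$ controls how many attributes are swapped per pass.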
Generate synthetic data to match sample data

I am developing a Python package, PySynth, aimed at data synthesis that should do what you need: https://pypi.org/project/pysynth/ The IPF method used there now does not work well for datasets with many columns, but it should be sufficient for the needs you mention here.
what is bias and variance of an estimator?

It may be that the math notation is somewhat intimidating, but it's not that daunting. Take a look at what happens with an unbiased estimator, such as the sample mean, whose bias is zero: $$E[\bar{x}] - \theta = 0.$$
The difference between the expectation of the means of the samples we get from a population with mean $\theta$ and that population parameter, $\theta$, itself is zero, because the sample means will be all distributed around the population mean. None of them will be the population mean exactly, but the mean of all the sample means will be exactly the population mean.
This is not the case for other parameters, such as the variance, for which the variance observed in the sample tends to be too small in comparison to the true variance. So if we want to estimate the population variance from the sample we divide by $n-1$, instead of $n$ (Bessel's correction) to correct the bias of the sample variance as an estimator of the population variance: $$s^2 = \frac{1}{n-1}\sum_{i=1}^n \left(x_i - \bar{x}\right)^2.$$
In both instances, the sample is governed by the population parameter $\theta$, explaining the part in red in the defining equation: $\text{Bias}[\bar\theta]=E_{\color{red}{p(X|\theta)}}[\bar\theta]-\theta$. However, $\bar\theta$ steers away from $\theta$ when the estimator is biased.
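The bias of the uncorrected sample variance shows up in a small simulation (a sketch; with $n=5$ the $1/n$ estimator should average about $(n-1)/n = 0.8$ of the true variance):

```python
import random

random.seed(0)
n, trials = 5, 20000
sum_biased = sum_unbiased = 0.0
for _ in range(trials):
    x = [random.gauss(0.0, 1.0) for _ in range(n)]   # true variance = 1
    m = sum(x) / n
    ss = sum((xi - m) ** 2 for xi in x)
    sum_biased += ss / n          # divides by n: systematically too small
    sum_unbiased += ss / (n - 1)  # Bessel's correction: unbiased
print(sum_biased / trials, sum_unbiased / trials)   # ~0.8 and ~1.0
```

Averaging many estimates approximates the expectation in the bias definition, so the gap between 0.8 and 1.0 is precisely the bias being corrected.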
Mars attack (probability to destroy $n$ spaceships with $k \cdot n$ missiles)

An equivalent model for this process is first to put the $n$ spaceships into a bottle. Set the count of destroyed ships to zero. Enumerate the missiles $1, 2, \ldots, m$. To determine which ship is targeted by missile $i$, shake the bottle well and randomly draw a ship from the bottle. With probability $p$, mark it as destroyed; otherwise, do not change any of its markings. If it originally was intact and now has been marked as destroyed, increment the count of destroyed ships. Return this ship to the bottle and repeat.
This describes a Markov Chain on the counts $0, 1, \ldots, n$ that will be run through $m$ iterations. After $i$ ships have been destroyed, the chance that another will be destroyed (thereby making a transition from state $i$ to state $i+1$) will be the chance of selecting an undestroyed ship (of which there are $n-i$) times the chance of destroying that ship (which is $p$). That is,
$$\Pr(i\to i+1) = \frac{n-i}{n} p.$$
Otherwise, the chain stays in the state $i$. The initial state is $i=0$. Interest centers on the chance of being in state $n$ after $m$ iterations.
The transition matrix $\mathbb{P}$ of these probabilities, where $\mathbb{P}_{ij}$ is the probability of making the transition from $i$ to $j$, easily diagonalizes:
$$\eqalign{
\mathbb{P} & = \pmatrix{1-p & p & 0 & \cdots & 0 & 0 \\
0 & 1-\frac{n-1}{n}p & \frac{n-1}{n}p & \cdots & 0 & 0 \\
0 & 0 & 1 - \frac{n-2}{n}p & \frac{n-2}{n}p & \cdots & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & 1 - \frac{1}{n}p & \frac{1}{n}p \\
0 & 0 & \cdots & 0 & 0 & 1} \\
&=\mathbb{V} \pmatrix{1 & 0 & 0 & \cdots & 0 & 0 \\
0 & \frac{n-p}{n} & 0 & \cdots & 0 & 0 \\
0 & 0 & \frac{n-2p}{n} & \cdots & 0 & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & 0 & \frac{n - (n-1)p}{n} & 0 \\
0 & 0 & \cdots & 0 & 0 & \frac{n - np}{n}} \mathbb{V}^{-1}
}$$
where
$$\mathbb{V} = \pmatrix{\binom{n}{0} & \binom{n}{1} & \binom{n}{2} & \cdots & \binom{n}{n-1} & \binom{n}{n} \\
\binom{n-1}{0} & \binom{n-1}{1} & \binom{n-1}{2} & \cdots & \binom{n-1}{n-1} & 0 \\
\vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\
\binom{1}{0} & \binom{1}{1} & 0 & \cdots & 0 & 0 \\
\binom{0}{0} & 0 & 0 & \cdots & 0 & 0}
$$
is Pascal's Triangle. The inverse is readily found to be
$$\mathbb{V}^{-1} = \pmatrix{0 & 0 & \cdots & 0 & 0 & \binom{0}{0} \\
0 & 0 & \cdots & 0 & \binom{1}{1} & -\binom{1}{0} \\
0 & 0 & \cdots & \binom{2}{2} & -\binom{2}{1} & \binom{2}{0} \\
\vdots & \vdots & \ddots & \ddots & \vdots & \vdots \\
0 & \binom{n-1}{n-1} & \cdots & (-1)^{n-1+2}\binom{n-1}{2} & (-1)^{n-1+1}\binom{n-1}{1} & (-1)^{n-1+0}\binom{n-1}{0} \\
\binom{n}{0} & -\binom{n}{1} & \cdots & (-1)^{n+2}\binom{n}{2} & (-1)^{n+1}\binom{n}{1} & (-1)^{n+0}\binom{n}{0}
}$$
Let that central (diagonal) matrix be written $\Lambda$, so that $$\Lambda_{jj} = \frac{n-jp}{n}.$$
The matrix for $m$ iterations is
$$\mathbb{P}^m = \left(\mathbb{V \Lambda V^{-1}}\right)^m = \mathbb{V} \Lambda^m \mathbb{V}^{-1}\tag{*}$$
and, obviously,
$$(\Lambda^m)_{jj} = \Lambda_{jj}^m = \left(\frac{n-jp}{n}\right)^m.$$
Doing the multiplication in $*$ we find
$$\left(\mathbb{P}^m\right)_{0n} = \sum_{j=0}^{\min(m,n)} (-1)^j \binom{n}{j} \left(\frac{n-jp}{n}\right)^m.\tag{**}$$
This is the chance of being in state $n$ after starting in state $0$. It is zero for $m=0, 1, \ldots, n-1$ and after that it is $p^n$ times a polynomial of degree $m-n$ (with nonzero terms of degrees $0$ through $m-n$), which means no further simplification appears possible. However, when $n/p$ is largish (around $10$ to $20$ or so), the powers in the sum $**$ can be approximated by exponentials,
$$\left(\frac{n - jp}{n}\right)^m = \left(1 - \frac{j p}{n}\right)^m \approx \left(e^{-m p / n}\right)^j,$$
which in turn can be summed via the Binomial Theorem, giving
$$\left(\mathbb{P}^m\right)_{0n} \approx \left(1 - e^{-m p / n}\right)^n .$$
(When $m p /n$ and $n$ are both large, this can further be approximated as $\exp\left(-n e^{-m p / n}\right)$.)
To illustrate, this graphic plots the correct values in blue and the approximation in red for $m \le 100$ where $n=5$ and $p=1/3$. The differences are only a few percent at most.
The approximation can be used to estimate an $m$ that is likely to wipe out all the ships. If you would like there to be at least a $1 - \varepsilon$ chance of that, then choose $m$ so that
$m p / n$ is largish and
$m \approx n (\log(n) - \log(\varepsilon))/ p$.
This is obtained from a first-order Taylor series expression for the approximation. For instance, suppose we would like to have a $95\%$ chance of wiping out all the ships in the example of the figure. Then $\varepsilon = 0.05$ and
$$m \approx 5(\log(5) - \log(0.05)) / (1/3) = 69.$$
Note that $m p / n = 69(1/3)/5 = 4.6$ is not terribly large, but it's getting there: the approximation might be OK. In fact, the approximate chance is $95.07\%$ while the correct chance is $95.77\%$.
This is a modified Coupon Collector's Problem in which each coupon that is found has only a $p$ chance of being useful. The method used here produces the entire distribution of destroyed ships for any $m$: just inspect the first row of $\mathbb{P}^m$.
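Both the exact sum $(**)$ and the exponential approximation are quick to check numerically for the worked example ($n=5$, $p=1/3$, $m=69$):

```python
import math

n, p, m = 5, 1 / 3, 69

# Exact probability of being in state n after m missiles, from the sum (**)
exact = sum((-1) ** j * math.comb(n, j) * ((n - j * p) / n) ** m
            for j in range(n + 1))

# The approximation (1 - exp(-m p / n))^n
approx = (1 - math.exp(-m * p / n)) ** n

print(round(exact, 4), round(approx, 4))   # ~0.9577 and ~0.9507
```

The two values reproduce the 95.77% exact and 95.07% approximate chances quoted above.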
intuition behind exchangeability property, and its use in statistical inference

Exchangeability, loosely speaking, means you can permute the indices of the random variables in the expression $F(x_1, \dots, x_n)$ without having the result of the probability calculation change. This means, basically, that you can put the observed value of, for example, $x_1$, in where $x_3$ is in the list of values and vice versa (or more complex permutations) without altering the calculated probability.
Consider an urn example; 3 black balls and 2 white balls, sampling without replacement. Now let's draw two balls; we get one white and one black. Does the probability of the sequence $(w,b)$ equal that of the sequence $(b,w)$? If so, and if this holds for all sequences and all samples, then the sequence is exchangeable, although the draws themselves are clearly not independent.
$P(b,w) = 3/5 * 1/2 = 3/10$
$P(w,b) = 2/5 * 3/4 = 3/10$.
If we see $x_1 = w$ and $x_2 = b$, and permute the indices in the probability calculation to (2,1) instead of (1,2), which means we calculate $P(b,w)$ instead of $P(w,b)$, we'll get the same numeric result. The fact that this is universally true in urn models of this sort means that the sequence of draws (from urn models of this sort) is exchangeable.
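The urn arithmetic can be checked by exact enumeration (a sketch with exact fractions; the helper is illustrative):

```python
from fractions import Fraction

def seq_prob(seq, counts):
    """Probability of drawing the given colour sequence without replacement."""
    counts = dict(counts)          # copy so the urn isn't mutated
    total = sum(counts.values())
    p = Fraction(1)
    for c in seq:
        p *= Fraction(counts[c], total)
        counts[c] -= 1
        total -= 1
    return p

urn = {"b": 3, "w": 2}
print(seq_prob("bw", urn), seq_prob("wb", urn))   # both 3/10
```

Every permutation of a sequence yields the same probability, which is exactly the exchangeability of the draws despite their dependence.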
As for why we care, I can hardly do better than to point you to this paper by Bernardo (for the Bayesian perspective.) The tl;dr is that exchangeability is all that's necessary to show the existence of a probability distribution and a prior distribution on the parameter(s) of the probability distribution. So it's pretty fundamental stuff, not something you (directly) use to, e.g., help construct a particular statistical test.
To quote: "if a sequence of observations is judged to be exchangeable, then, any finite subset of them is a random sample of some model $p(x_i | \theta)$, and there exists a prior distribution $p(\theta)$ which has to describe the initially available information about the parameter [$\theta$] which labels the model."
34,532 | intuition behind exchangeability property, and its use in statistical inference | Equicorrelated random variables are another example for exchangeability but non-iid-ness. To take the simplest useful example, consider $n=3$. The correlation matrix then is
$$ \begin{pmatrix} 1&\rho&\rho\\ \rho&1&\rho\\ \rho&\rho&1 \end{pmatrix}$$
As each r.v. has correlation $\rho$ (within the legitimate limits, see here) with any other r.v., it does not matter which r.v. is in the first, second or third position.
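This permutation invariance can be checked directly. A Python/NumPy sketch ($\rho = 0.4$ is an arbitrary admissible value for $n=3$):

```python
import numpy as np
from itertools import permutations

rho = 0.4  # any admissible equicorrelation value for n = 3
corr = np.full((3, 3), rho)
np.fill_diagonal(corr, 1.0)

# Permuting the variables permutes rows and columns of the correlation
# matrix identically; an equicorrelated matrix is invariant under this.
for perm in permutations(range(3)):
    p = np.eye(3)[list(perm)]  # permutation matrix
    assert np.allclose(p @ corr @ p.T, corr)
```

For a generic (non-equicorrelated) matrix the assertion would fail for some permutation, which is exactly why equicorrelation buys exchangeability.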
34,533 | Why is Covariance Useful? | Covariance matrix contains more information than correlation matrix:
You can derive a correlation matrix from a covariance matrix.
But you cannot derive a covariance matrix using only a correlation matrix! (You also would need the standard deviations.)
Covariance matrices contain all the information of: (i) a correlation matrix plus (ii) a standard deviation vector. In some sense, covariance matrices are the more compact, mathematically convenient object to work with.
Another example using covariance:
I'll bring up a simple finance example that doesn't obviously involve regression:
Let there be $n$ possible investment assets.
Let $\Sigma$ be the covariance matrix for the $n$ assets.
Let $w$ be a vector denoting portfolio weights on the $n$ assets.
Then portfolio variance is given by the matrix equation:
$$ w^\top \Sigma w $$
You can't write this formula this succinctly using a correlation matrix.
A portfolio that minimizes variance would be a solution to:
$$ \begin{align*} &\text{minimize (over $w$)} \quad w^\top \Sigma w \\ &\text{subject to:} \quad w^\top 1 = 1 \end{align*}$$
Note this would be the same as minimizing the standard deviation of portfolio returns.
Covariance turns out to be a rather ubiquitous concept for any problem involving two or more random variables. It comes up all over the place. Better start getting used to it!
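As a sketch of these formulas in code (Python/NumPy; the covariance matrix below is invented for the example). The minimum-variance problem even has a closed-form solution, $w^{*} = \Sigma^{-1}\mathbf{1}/(\mathbf{1}^\top \Sigma^{-1} \mathbf{1})$:

```python
import numpy as np

# Hypothetical covariance matrix for 3 assets (symmetric, positive definite)
sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
ones = np.ones(3)

def portfolio_variance(w, sigma):
    return w @ sigma @ w  # the w' Sigma w of the text

# Closed-form minimum-variance weights subject to w'1 = 1
z = np.linalg.solve(sigma, ones)  # Sigma^{-1} 1
w_star = z / (ones @ z)           # sums to 1 by construction
```

Any other feasible weight vector (say, equal weights) gives at least as large a variance as `w_star`.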
34,534 | Why is Covariance Useful? | It all depends on how you write the parameters in question when describing a linear regression.
For random variables $X$ and $Y$, the best (in the sense of linear minimum-mean-square-error) estimate of $Y$ in terms of $X$ is commonly written as
$$\hat{Y} = \rho\frac{\sigma_Y}{\sigma_X}(X-\mu_X) + \mu_Y\tag{1}$$
and then it is claimed that $\rho^2$ is the fraction of $\sigma_Y^2$ that has been "explained" by $X$. All this causes you to get all riled up and declare that the covariance is a totally useless concept. But some folks (not many) like to write $(1)$ as
$$\hat{Y} - \mu_Y = \frac{\operatorname{cov}(Y,X)}{\operatorname{var}(X)}\left(X-\mu_X\right)\tag{2}$$
and say that the deviation of the estimate $\hat{Y}$ from its mean and the deviation of $X$ from its mean have the same ratio as the covariance of $X$ and $Y$ and the variance of $X$, while the explained variance is just $\displaystyle \frac{\operatorname{cov}^2(Y,X)}{\operatorname{var}(X)}$. Would you be willing to listen to an argument from them that it is the covariance that is the more fundamental concept and that the correlation coefficient is just some gobbledygook of little interest? Why, it can't even make up its mind if its first name is Pearson or Spearman!
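The ratio $\operatorname{cov}(Y,X)/\operatorname{var}(X)$ in $(2)$ is exactly the ordinary least-squares slope, which is easy to check numerically. A Python/NumPy sketch with simulated data (the true slope of 2 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 2.0 * x + rng.normal(size=1000)

# cov(Y, X) / var(X) ...
slope_cov = np.cov(y, x, ddof=1)[0, 1] / np.var(x, ddof=1)
# ... equals the least-squares slope (matching ddof makes them identical)
slope_ols = np.polyfit(x, y, 1)[0]
```

The two numbers agree to machine precision, since both are $\sum(x_i-\bar{x})(y_i-\bar{y})/\sum(x_i-\bar{x})^2$.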
34,535 | Why is Covariance Useful? | I had the same question. Seems that the value of the covariance, by itself, is meaningless. The only thing you can tell from the covariance itself is its sign: if the covariance is positive, the populations show similar behavior, and if it is negative, they show opposing behavior.
However, the value of the covariance is a necessary component of other calculations. In finance, the covariance (of a share and the market) divided by the variance of the market gives the Beta value, which is a useful thing.
So, by itself the magnitude of the covariance is meaningless but in combination with other things it provides meaning.
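The Beta calculation mentioned above, as a Python/NumPy sketch (the returns are simulated, and the true beta of 1.5 is made up):

```python
import numpy as np

rng = np.random.default_rng(1)
market = rng.normal(0.0005, 0.01, size=250)              # daily market returns
share = 1.5 * market + rng.normal(0.0, 0.005, size=250)  # share with true beta 1.5

# Beta = cov(share, market) / var(market)
beta = np.cov(share, market, ddof=1)[0, 1] / np.var(market, ddof=1)
# beta comes out close to 1.5; beta > 1 means the share amplifies market moves
```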
34,536 | How to simulate the different types of missing data | Rubin defined three types of missing data:
Missing Completely at Random (MCAR)
MCAR occurs when there is a simple probability that data will be missing, and that probability is unrelated to anything else in your study. For example, a patient can miss a follow up visit because there is an accident on the highway and they simply can't get to the visit.
Missing at Random (MAR)
MAR happens when the missingness is related to information in your study, but all the relevant information to predict missingness is in the existing dataset. An example might be a weight loss study in which people drop out if their trajectory is that they are gaining weight. If you can estimate that trajectory for each person before anyone drops out, and see that those whose slope is positive subsequently drop out, you could take that as MAR.
Not Missing at Random (NMAR)
NMAR is like MAR in that the missingness is related to what is happening in your study, but differs in that the information needed to predict the missingness is itself part of the data that are missing. For instance, if you are studying a treatment for vertigo / 'woozy-ness', but anytime a patient is really woozy, they don't show up for the follow-up visit. Thus, all the high values are missing, and they are missing because they are high!
In other words, the types of missingness specify the mechanism that generates the missingness itself, so if you understand how the mechanism works, you simply write code to replicate it. For example, if you want 7% of your data missing completely at random, draw a number from a uniform distribution for every value in your dataset, and if it is <.07, replace the value with NA. For missing at random, simulate a logistic regression data generating process that outputs a probability of each value being missing (i.e., being replaced with NA) using information that will continue to be non-missing in your dataset. (For an example of simulating a logistic regression data generating process, see my answer here: Logistic regression simulation in order to show that intercept is biased when Y=1 is rare.) You can generate missingness not at random using a similar logistic regression data generating process, where the probability of missingness is a function of the y-value itself (i.e., the value that will potentially be replaced by NA).
Here is an example:
##### generic data setup:
set.seed(977) # this makes the simulation exactly reproducible
ni = 100 # 100 people
nj = 10 # 10 week study
id = rep(1:ni, each=nj)
cond = rep(c("control", "diet"), each=nj*(ni/2))
base = round(rep(rnorm(ni, mean=250, sd=10), each=nj))
week = rep(1:nj, times=ni)
y = round(base + rnorm(ni*nj, mean=0, sd=1))
# MCAR
prop.m = .07 # 7% missingness
mcar = runif(ni*nj, min=0, max=1)
y.mcar = ifelse(mcar<prop.m, NA, y) # unrelated to anything
View(cbind(id, week, cond, base, y, y.mcar))
# MAR
y.mar = matrix(y, ncol=nj, nrow=ni, byrow=TRUE)
for(i in 1:ni){
for(j in 4:nj){
dif1 = y.mar[i,j-2]-y.mar[i,j-3]
dif2 = y.mar[i,j-1]-y.mar[i,j-2]
if(dif1>0 & dif2>0){ # if weight goes up twice, drops out
y.mar[i,j:nj] = NA; break
}
}
}
y.mar = as.vector(t(y.mar))
View(cbind(id, week, cond, base, y, y.mar))
# NMAR
sort.y = sort(y, decreasing=TRUE)
nmar = sort.y[ceiling(prop.m*length(y))]
y.nmar = ifelse(y>nmar, NA, y) # doesn't show up when heavier
View(cbind(id, week, cond, base, y, y.nmar))
34,537 | Semi supervised classification with unseen classes | This is a very interesting framework.
Building one-vs-all classifiers will help you to identify A,B,C and "others".
However, it won't be able to differentiate between D, E and the rest in "others".
I think that you should cluster your data in order to identify the clusters of the unknown class.
If you have a distance function at hand, you can evaluate how well it separates the known classes. However, you can actually learn a proper distance function.
Let L be your labeled dataset.
Build a pair dataset for all pairs x,y in L.
Let the concept of the pairs dataset be the desired distance.
If class(x)=class(y), the distance should be zero.
If the classes are different, it is a domain question what the desired distance should be (e.g., the distance between A and B might be smaller than the distance between B and C).
Now train a regressor on the pairs dataset.
Use the regressor as the distance function to your clustering algorithm.
Hierarchical clustering algorithms seem to fit your needs well.
Run the clustering algorithm on the unlabelled data to get clusters of samples.
If you also have one-vs-all classifiers for the known classes, run them on the samples.
Clusters where the samples tend not to belong to the known classes are the candidates for the new classes.
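A compact end-to-end sketch of this recipe (Python/NumPy only, with made-up toy data; a linear model stands in for the regressor, and a single-linkage/union-find step stands in for a full hierarchical clustering — both are illustrative choices, not requirements):

```python
import numpy as np
from itertools import combinations

# Toy labeled data: three well-separated known classes (stand-ins for A, B, C)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.2, size=(20, 2)) for c in (0.0, 3.0, 6.0)])
y = np.repeat([0, 1, 2], 20)

# Pairs dataset: feature = |x_i - x_j| per coordinate (plus an intercept),
# target = desired distance (0 for same class, 1 otherwise)
pairs = list(combinations(range(len(X)), 2))
F = np.array([np.hstack([np.abs(X[i] - X[j]), 1.0]) for i, j in pairs])
t = np.array([0.0 if y[i] == y[j] else 1.0 for i, j in pairs])
w, *_ = np.linalg.lstsq(F, t, rcond=None)  # the "regressor"

def learned_distance(a, b):
    return float(np.hstack([np.abs(a - b), 1.0]) @ w)

# Single-linkage clustering via union-find: join pairs whose learned
# distance falls below an illustrative cutoff between the typical
# same-class (~0.2) and cross-class (~0.7) learned distances.
parent = list(range(len(X)))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
for i, j in pairs:
    if learned_distance(X[i], X[j]) < 0.45:
        parent[find(i)] = find(j)
clusters = {find(i) for i in range(len(X))}  # recovers the three groups
```

Run on unlabeled data, the same learned distance plus clustering would surface candidate groups; the 0.45 cutoff and the linear model are assumptions of this sketch.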
34,538 | Semi supervised classification with unseen classes | Have a look at one-class-classifiers. These are classifiers that can tell you that a new object does not agree with your training data.
I'd train a regular multi-class classifier, and a one-class classifier on all your data. If a new instance is rejected by the one-class classifier, have the user either assign it to one of the existing classes; or have the user assign a new class. Then update your classifier.
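As a minimal stand-in for such a one-class classifier (Python/NumPy sketch; a Mahalanobis-distance threshold plays the role that e.g. a one-class SVM would play in practice, and the threshold is an assumption):

```python
import numpy as np

# Fit a Gaussian to the known training data; reject points that are
# too far from it in squared Mahalanobis distance.
rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 2))  # "known" data
mu = train.mean(axis=0)
prec = np.linalg.inv(np.cov(train.T))        # inverse covariance

def is_novel(x, threshold=13.8):             # ~ chi-square(2) 0.999 quantile
    d2 = (x - mu) @ prec @ (x - mu)
    return bool(d2 > threshold)

# is_novel(np.array([0.2, -0.4])) -> False (agrees with training data)
# is_novel(np.array([8.0, 8.0]))  -> True  (candidate for a new class)
```

Rejected instances are then routed to the user for labeling, exactly as described above.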
34,539 | Semi supervised classification with unseen classes | I would treat this as a set of semi-supervised one-vs-all classification problems, that is build a binary semi-supervised classifier for each known target class by treating its known instances as positives, instances with different known labels as negative (if classes are mutually exclusive) and the remainder as unlabeled. A common and effective way of incorporating unlabeled instances in the learning process is to treat them as negatives with a very low misclassification penalty (far lower than known negatives).
Unlabeled instances that get rejected by all of the resulting classifiers are then likely part of some class you have no labels of. A subsequent step could then be to cluster all of these unclassified instances in an attempt to determine the number of classes you have no labels of, though this is far from easy.
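The low-misclassification-penalty idea can be sketched as a logistic regression with per-sample weights (Python/NumPy, plain gradient descent; the data, the 0.1 weight, and the class layout are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(+2.0, 1.0, size=(50, 2))    # known positives
neg = rng.normal(-2.0, 1.0, size=(50, 2))    # known negatives
unl = rng.normal(-1.0, 1.5, size=(200, 2))   # unlabeled, treated as negatives

X = np.vstack([pos, neg, unl])
X = np.hstack([X, np.ones((len(X), 1))])     # add intercept column
y = np.r_[np.ones(50), np.zeros(250)]
sw = np.r_[np.ones(100), np.full(200, 0.1)]  # low penalty for unlabeled rows

w = np.zeros(3)
for _ in range(2000):                        # plain (weighted) gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.5 * X.T @ (sw * (p - y)) / sw.sum()

def predict(x):
    return 1.0 / (1.0 + np.exp(-(np.r_[x, 1.0] @ w)))
```

Because the unlabeled rows carry little weight, a genuinely positive unlabeled point near the positive cluster barely hurts the fit, which is the point of the trick.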
34,540 | Semi supervised classification with unseen classes | Without example prototypes for the additional groups D,E,etc, identifying them as independent clusters may or may not be needed. If the data can be modeled well by something like a gaussian mixture model, a well-fit model may indeed involve identification of these extra groups. However, such methods are largely unsupervised in nature... there is little need or use for the prototypes you have for A,B,C beyond seeding the collection.
Another method would be to tune a classification model using your labeled dataset, and then apply it to the unlabeled set to obtain an expected classification. Define a threshold of uncertainty, and use items which do not classify below it to seed a new collection of items - that being your unknown new category. Use these new populations to retrain your classifiers. This method is generally known as the Expectation-Maximization algorithm in K-Means and Gaussian Mixture Models, but the general logic can be implemented using neural networks or random forests as classification engines as well. If you need to identify category structure within that newly-identified category, you will need to use an unsupervised technique such as clustering.
The other way to identify the new category is via certain "single-class" classifiers whose intention is to identify population outliers, for example single-class SVMs. I have also experimented with single-class decision trees. However, such methods would not use most of the data you have and I would not expect superior results.
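A toy version of that threshold-then-seed step (Python/NumPy; a nearest-centroid rule stands in for a real classifier, and every number here is invented):

```python
import numpy as np

rng = np.random.default_rng(0)
# Labeled data for two known classes; centroids act as the trained model
known = {0: rng.normal(0.0, 0.3, size=(20, 2)),
         1: rng.normal(3.0, 0.3, size=(20, 2))}
centroids = {k: v.mean(axis=0) for k, v in known.items()}

# Unlabeled pool: some points from class 0, some from an unseen group
unlabeled = np.vstack([rng.normal(0.0, 0.3, size=(10, 2)),
                       rng.normal(8.0, 0.3, size=(10, 2))])

def classify(x, max_dist=1.5):
    """Return the nearest known class, or None if the point is too far
    from every centroid (i.e. above the uncertainty threshold)."""
    k, c = min(centroids.items(), key=lambda kc: np.linalg.norm(x - kc[1]))
    return k if np.linalg.norm(x - c) <= max_dist else None

labels = [classify(x) for x in unlabeled]
# Points labeled None seed the candidate "new" category; confident points
# could be added to the training set and the model retrained (EM-style).
```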
34,541 | What balancing method can I apply to an imbalanced data set? | Since you're using R, you could make use of some elaborated methods like ROSE and SMOTE. But I'm not entirely certain if re-balancing your dataset is the right solution in your case.
An alternative could be a cost-sensitive algorithm like C5.0 that doesn't need balanced data. You could also think about applying Markov chains to your problem.
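For a quick baseline before reaching for ROSE or SMOTE, plain random oversampling is only a few lines (shown in Python/NumPy for brevity, with made-up data; ROSE and SMOTE generate synthetic minority points rather than duplicating existing ones):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.r_[np.zeros(90), np.ones(10)]  # 90/10 class imbalance

# Duplicate minority rows at random until both classes have 90 samples
minority = np.flatnonzero(y == 1)
extra = rng.choice(minority, size=80, replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.r_[y, y[extra]]
```

As noted below, whatever resampling you use, validation should still happen on the original class distribution.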
34,542 | What balancing method can I apply to an imbalanced data set? | I think that in your dataset the main challenge is not the imbalance itself.
The dataset is small and due to the few classes you don't have too many samples for any of them.
By using one-vs-all concepts (A or not A, B or not B) you can get more samples for each concept.
You can take advantage of the fact that the classes are ordered (A > B > C > D > E) and use a concept that aggregates some of them (e.g., B and above, D and below).
Assuming there is no real difference in the reason to get D or E, not only will you get more samples, you will also gain by diminishing the distinction between quite similar concepts.
As for changing the dataset in order to cope with the imbalance, go for it.
However, you should validate on the original distribution.
For details see:
https://datascience.stackexchange.com/questions/810/should-i-go-for-a-balanced-dataset-or-a-representative-dataset/8628#8628
Instead of just over/under sampling, you can use a better technique in order to cope with the imbalance.
For details see:
https://www.quora.com/In-classification-how-do-you-handle-an-unbalanced-training-set/answer/Dan-Levin-2
34,543 | What balancing method can I apply to an imbalanced data set? | Learning from Imbalanced Data
If you choose to oversample, be sure to do it after you create your train-test splits. If you are using cross validation, you should oversample within each fold.
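Oversampling within each fold can be sketched as follows (Python, with made-up toy data; the point is only that the duplication happens after the split, using training rows only, so no test row leaks into training):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical imbalanced toy data: 90 negatives, 10 positives.
X = rng.normal(size=(100, 3))
y = np.array([0] * 90 + [1] * 10)

def oversample(idx, y, rng):
    """Randomly duplicate minority rows among the given (training) indices."""
    pos, neg = idx[y[idx] == 1], idx[y[idx] == 0]
    if len(pos) == 0 or len(pos) >= len(neg):
        return idx
    extra = rng.choice(pos, size=len(neg) - len(pos), replace=True)
    return np.concatenate([idx, extra])

k = 5
folds = np.array_split(rng.permutation(len(y)), k)
for i in range(k):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    # Oversample AFTER the split, and only within the training fold:
    train_idx = oversample(train_idx, y, rng)
    # ...fit on X[train_idx], y[train_idx]; evaluate on X[test_idx]...
```

Oversampling before splitting would let duplicates of the same minority row land in both train and test folds, inflating the apparent performance.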
34,544 | Regression Developing Countries: GDP-Growth or GDP | Basically, there's no ambiguity here: you must use the differences in the regression. There could be some discussion whether it's a simple difference or a log difference, but the latter is more common in the literature. If you had a cross-section, then this wouldn't matter. For a short period of time it probably doesn't matter that much either, but you're probably going to include countries with high growth in your sample, so the rate of change would be more appropriate.
GDP is not stationary even when you suspect or observe that it's stagnant, i.e. not growing over a sample period.
I think most researchers would take it that GDP is an exponential random walk process: $\Delta \ln GDP_t=g_t \Delta t+\varepsilon_t$. However, you can google to find that some would argue that it's an exponential trend process such as $GDP_t=GDP_0 e^{g_t\Delta t}+\varepsilon_t$. In the former case you see that the growth rate is stochastic, while in the latter it's deterministic and the error simply adds up. Both are nonstationary, hence, the growth rate must be used in any case.
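As a quick numeric illustration of the log difference $\Delta \ln GDP_t$ versus the simple growth rate (Python; the GDP series below is made up):

```python
import numpy as np

# Hypothetical GDP series for one country, in constant prices.
gdp = np.array([100.0, 103.0, 107.1, 110.3, 112.5])

simple_growth = np.diff(gdp) / gdp[:-1]   # (GDP_t - GDP_{t-1}) / GDP_{t-1}
log_growth = np.diff(np.log(gdp))         # Delta ln GDP_t

# Since ln(1 + g) ~ g, the two nearly coincide for small growth rates.
```

For fast-growing economies the gap widens (ln(1+g) − g is roughly −g²/2), which is one reason the log difference is preferred when the sample includes high-growth countries.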
34,545 | Regression Developing Countries: GDP-Growth or GDP | You are interested in why countries are "stagnating"; stagnation is about a lack of growth, so GDP growth seems logical.
A couple notes:
1. You will need data on countries that are growing at different rates.
2. If you are using data from different years for the same countries (e.g. Ghana in 2011, Ghana in 2012, Ghana in 2013...) then a regular regression is probably inappropriate; your errors are unlikely to be independent.
34,546 | Regression Developing Countries: GDP-Growth or GDP | The excellent textbook by Barro and Sala-i-Martin (Economic Growth, MIT press, 2004) can help you to choose your model.
However, as Peter Flom said, be careful with cross-section regression, it can be misleading; you might need to apply a panel data methodology (see the paper by Islam, 1995, in The Quarterly Journal of Economics 110(4), 1127-1170). Again, see Barro and Sala-i-Martin (2004) for almost all references you might need.
It might be useful for you also to check some classical (but old!) papers on growth economics such as Sala-i-Martin (1994, European Economic Review 38(3-4), 739-747), Islam (1995, see above) and Baumol (1986, The American Economic Review 76(5), 1072-1085) among many, many others.
34,547 | Regression Developing Countries: GDP-Growth or GDP | A very important question is availability of data. For example, you can get the data from wto.org.
A very realistic answer to your question is that you should build your model in both ways and compare prediction-accuracy measures, e.g. $R^2$, adjusted $R^2$, and residual graphs. Then start validating your hypotheses about which form of GDP you should take.
My hypothesis on this is that it is just equivalent to a transformation of the GDP variable.
34,548 | First principal component of 2D data forming a rectangle? | Imagine data points filling a 2D rectangle in the center of the coordinate system, with its sides oriented along the coordinate axes: from $-a$ to $a$ along the $x$-axis, and from $-b$ to $b$ along the $y$-axis.
The projection on $x$ is a uniform distribution with variance $a^2/3$. The projection on $y$ is also a uniform distribution with variance $b^2/3$. Since $x$ and $y$ are obviously not correlated (if this is not obvious, ask yourself whether the correlation should be positive or negative?.. due to symmetry it can only be zero), the covariance between them is zero. This yields the covariance matrix $$\left(\begin{array}{cc}a^2/3&0\\0&b^2/3\end{array}\right).$$ The task of PCA is to diagonalize the covariance matrix. But this one is already diagonal! This means that no rotation is necessary, and $x$-axis and $y$-axis are themselves principal axes. If e.g. $a>b$, then the $x$-axis is the first PC.
This might be a bit counter-intuitive: it might seem that a projection on the diagonal should have larger variance than the projection on the longer side; but it is in fact not so.
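For the skeptical, a quick numerical check (a Python sketch with made-up half-sides $a=2$, $b=1$): sample the rectangle, estimate the covariance matrix, and look at the leading eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, 1.0   # half-sides of the rectangle, a > b

# 100,000 points uniform on [-a, a] x [-b, b].
pts = np.column_stack([rng.uniform(-a, a, 100_000),
                       rng.uniform(-b, b, 100_000)])

C = np.cov(pts, rowvar=False)   # close to diag(a**2/3, b**2/3)

evals, evecs = np.linalg.eigh(C)
pc1 = evecs[:, np.argmax(evals)]   # first PC: along the x-axis (up to sign)
```

The off-diagonal entry of `C` is near zero and the first principal component is (up to sign) the unit vector along $x$, i.e. the longer side, not the diagonal.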
Bonus: Dzhanibekov effect
You seem to have meant a 3D rectangular parallelepiped instead of 2D rectangle. The arguments of course stay the same: covariance matrix is $3\times 3$ but still diagonal with principal axes being the coordinate axes.
Incidentally, there is a curious effect in mechanics concerning rotating solid body with three different moments of inertia (which is a mechanics analog of variance). It turns out that rotations around the axes with the largest and the smallest moment of inertia are stable, but rotation around the axis with the middle moment of inertia is unstable. Moreover, a rotating body will experience sudden "flips", which is known as Dzhanibekov effect -- after a Russian cosmonaut who observed it in space. One can easily observe it when spinning a book or a table tennis racket. See the following great threads on mathoverflow and on physics.SE and these videos:
Mathoverflow thread -- check out Terry Tao's answer!
Physics.SE thread
Youtube: Dzhanibekov demonstrating his effect
Youtube: rotating book in space
Youtube: something else spinning in space
Youtube: demonstration with a table tennis racket
34,549 | First principal component of 2D data forming a rectangle? | amoeba's (great) answer says:
This might be a bit counter-intuitive: it might seem that a projection on the diagonal should have larger variance than the projection on the longer side; but it is in fact not so.
Indeed it is counter-intuitive, but maybe we can counter the counter-intuition? (As intuition is my goal here, I won't try to be rigorous. Beware.)
For that, let's look at a rectangle which is oriented along the axes, with $a=80$ and $b=20$ (sticking to amoeba's notations). In the first images, the black lines are the directions of the original basis, and the green lines - of the new basis. Images on the left show the world according to the original basis, and those on the right - according to the new basis.
A Warm Up: Is the new covariance matrix diagonal?
Recall that $\text{cov}(X,Y)=E[(X-E[X])(Y-E[Y])]$.
The center of the data is the origin (both before and after the change of basis), so $E[X]=0=E[Y]$ and $\text{cov}(X,Y)=E[XY]$. That is, data points in the first and third quadrants "push" the value of the covariance up, while data points in the second and fourth quadrants "push" the value of the covariance down.
Looking at the image, you can see that:
In the original basis, for every data point that "pushes up" (purple), there is a data point that "pushes down" (orange) with the same "force" (i.e. $|XY|$ is equal for both), so that they cancel each other's efforts. Therefore, we get $\text{cov}(X,Y)=0$.
In the new basis, there are much more data points that "push down", and many of them with more "force" (i.e. $|XY|$ is higher).
Therefore, we get $\text{cov}(X,Y)<0$.
So in the new basis, the covariance matrix is of the shape $\left(\begin{array}{cc}+&-\\-&+\end{array}\right)$, i.e. not diagonal. Good.
By the way, in case of a square, the covariance matrix is diagonal before and after the change of basis:
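The warm-up can also be verified without any sampling: rotate the exact covariance matrix of the rectangle (and of a square) by $45°$. A Python sketch, using the same $a=80$, $b=20$:

```python
import numpy as np

a, b = 80.0, 20.0
C = np.diag([a**2 / 3, b**2 / 3])   # covariance in the original basis

t = np.pi / 4                        # rotate the basis by 45 degrees
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
C_new = R.T @ C @ R
# C_new[0, 1] = (b**2 - a**2) / 6 = -1000 for the rectangle:
# negative (rotating the other way would flip the sign).

C_square_new = R.T @ np.diag([a**2 / 3, a**2 / 3]) @ R
# For a square the rotated covariance stays diagonal.
```

The rectangle picks up a nonzero off-diagonal entry under rotation, while the square's covariance matrix is rotation-invariant, matching the figures.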
The Real Deal: Does the projection on the diagonal have a higher variance?
Recall that $\text{Var}(X)=E[(X-E[X])^2]$. As aforementioned, $E[X]=0$, and thus $\text{Var}(X)=E[X^2]$.
So in our case, the variance along the horizontal axis is a measure of the horizontal distances between data points and the origin.
To start building our intuition, let's check how many data points have $x > 70$, i.e. a horizontal distance from the origin greater than $70$:
In the original basis - $33$ data points
In the new basis - $26$ data points
Looks good, but I am not convinced yet that on average the horizontal distances from the origin decrease after the change of basis.
Let's try another approach: examine the difference in the X value of each data point (caused by the change of basis):
So, what did the change of basis do to the horizontal distances from the origin?
increased them by more than $5$ for $0$ data points
increased them by around $4$ for $10.6\%$ of the data points
increased them by around $2$ for $18.6\%$ of the data points
left them quite the same for $19.3\%$ of the data points
decreased them by around $2$ for $17.7\%$ of the data points
decreased them by around $4$ for $19.1\%$ of the data points
decreased them by more than $5$ for $14.6\%$ of the data points
And more roughly:
increased them for $39.5\%$ of the data points
left it the same for $1$ data point (the origin)
decreased them for $60.3\%$ of the data points
It is quite obvious that on average the horizontal distances decreased after the change of basis (i.e. the variance along the horizontal axis is lower after the change of basis!), but for me, the intuition is still hard to grasp.
Let's try to look at the rough version of the same image:
Much better.
I still find it hard to articulate the intuition (mainly as I am still struggling with it myself), but I hope that the images speak for themselves.
Finally, for completeness' sake, the same image, but in the case of a square:
(The horizontal distance from the origin increased/decreased for exactly $50\%$ of the data points, and the variance along the horizontal axis is the same before and after the change of basis.)
34,550 | Is the chi-squared test appropriate with many small counts in a 5x2 table? | The solution depends intimately on how the data were collected and summarized. This answer takes you through a process of thinking about the data, analyzing them, reflecting on the results, and improving the test until some insight is achieved. Along the way we develop and compare five variants of the $\chi^2$ test.
Fisher's test is not applicable because you have two independent samples. Assuming you decided beforehand how large each sample should be, the column counts ("marginals") are indeed fixed, as assumed by that test. But (I presume) you had no predetermined control over the total numbers of each ethnicity that would be observed, so the row counts (their marginals) are not fixed. That is contrary to what Fisher's test assumes.
(Fisher's test would indeed apply if these data had arisen from a single collection of $45$ subjects who were randomly divided by the experimenter into two groups of predetermined sizes $23$ and $22$, as is often done in controlled experiments.)
The Chi-squared Test
In these data the total count is $45$ for $5\times 2=10$ table entries, producing a mean count of $4.5$ spread through two columns of roughly equal totals ($23$ and $22$). This is starting to get into the range where rules of thumb suggest the $\chi^2$ statistic--which is just a number measuring a discrepancy between the two ethnicity distributions--may have an approximate $\chi^2$ distribution. Let us therefore begin by computing the statistic and its associated p-value. (I am using R for these calculations.)
x <- cbind(A=c(1,3,1,3,15), B=c(2,0,0,8,12))
chisq.test(x)
The output is
X-squared = 6.9206, df = 4, p-value = 0.1401
along with a warning that "Chi-squared approximation may be incorrect." Fair enough. But since the reported p-value is not extreme--so we're not reaching far into the tails of the distribution of the statistic--we can expect this p-value to be fairly accurate. Let's see.
Simulating the Chi-squared P-value
One way to check is to simulate the true distribution of the $\chi^2$ statistic. R offers a "Monte Carlo" test.
chisq.test(x, simulate.p.value=TRUE, B=1e5)
Using $100,000$ iterations (and repeating that several times), this test reports a p-value consistently near $0.130$: reasonably close to the original p-value of $0.1401$.
(If I am reading the R source code for chisq.test correctly, in each Monte-Carlo iteration it computes a $\chi^2$ statistic comparing the simulated data to the estimates obtained from the original data (rather than to estimates obtained from the marginals of the simulated data, as is performed in a true $\chi^2$ test). It is difficult to see how this is directly applicable to the original hypothesis. The R manual refers us to Hope, A. C. A. (1968) A simplified Monte Carlo significance test procedure. J. Roy. Statist. Soc. B 30, 582–598. I cannot find in that paper any justification for what R is doing; in particular, the paper uses independent tests of each simulated sample to assess goodness of fit for continuous distributions, whereas the R software conducts a series of dependent tests to assess independence among samples involving discrete distributions.)
Going Deeper
Another approach is to bootstrap the test. This procedure uses the data to estimate the parameters under the null hypothesis (that the two samples are from the same population), then repeatedly replicates the data-collection process by drawing new values according to that distribution. By studying the distribution of $\chi^2$ statistics that arise, we can see where the actual $\chi^2$ statistic fits--and decide whether it is sufficiently extreme to warrant rejection of the null hypothesis.
The row marginals let us estimate the relative proportions of each ethnicity under the null hypothesis: Ethnicity_1 was observed $(2+1)/45$ of the time, etc. Each bootstrap iteration draws two independent samples from this hypothesized distribution, one of size $23$ and another of size $22$, and computes the $\chi^2$ statistic for these two samples.
When you try that, you will stumble upon a very interesting phenomenon: because ethnicities 2 and 3 were observed rarely, in many simulated samples they are not observed at all. This makes it impossible to calculate a $\chi^2$ statistic based on all five ethnicities! (It would require you to divide by zero.) What to do?
You could just compute the $\chi^2$ statistic based on the ethnicities actually observed, even when only three or four different ones appear among the two samples. When I do this with $10,000$ iterations, I obtain a p-value of $0.086$.
You could compute the $\chi^2$ statistic only in those simulations where all five ethnicities were observed. This time I compute a p-value of $0.108$. (Less than $60\%$ of all simulations included all five ethnicities.)
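The bootstrap just described can be mirrored outside R. A sketch of option (1) in Python with scipy (not the exact code used above, a smaller iteration count, and `correction=False` so no Yates correction sneaks in when a simulated table collapses to two rows):

```python
import numpy as np
from scipy.stats import chi2_contingency

a = np.array([1, 3, 1, 3, 15])    # counts for sample A (n = 23)
b = np.array([2, 0, 0, 8, 12])    # counts for sample B (n = 22)
p = (a + b) / (a + b).sum()       # pooled ethnicity proportions under H0

obs_stat = chi2_contingency(np.column_stack([a, b]), correction=False)[0]

rng = np.random.default_rng(0)
stats = []
for _ in range(2000):
    sa = rng.multinomial(a.sum(), p)     # new sample of size 23
    sb = rng.multinomial(b.sum(), p)     # new sample of size 22
    tab = np.column_stack([sa, sb])
    tab = tab[tab.sum(axis=1) > 0]       # option (1): drop unobserved ethnicities
    stats.append(chi2_contingency(tab, correction=False)[0])

p_value = np.mean(np.array(stats) >= obs_stat)
```

The observed statistic reproduces the $6.9206$ reported earlier, and the bootstrap p-value lands near the $0.086$ quoted for option (1), up to Monte Carlo noise.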
Conclusions
We have obtained a range of p-values from $0.086$ through $0.140$, some more legitimately applicable than others. (The Fisher Exact test p-value of $0.119$, by the way, fits within this range.) If your criterion for a significant result is more stringent than $8.6\%$, there is no problem: you will not reject the null hypothesis and so you needn't worry over which tests really are applicable. But if your criterion lies within this range (such as $10\%$), then your choice of test matters.
As the preceding efforts at simulation showed so clearly, which test to use depends on your application. Do you know that only five ethnicities could have been observed? Or are you tracking only the ethnicities that happened to appear in your samples? From the gap in numbering between 2 and 4 I would guess that Ethnicity_3 might be possible but was not observed. As such, if you choose to use a $\chi^2$ statistic based only on the ethnicities observed, then you are in situation (1) and you should report a p-value of $0.086$. If you had collected the data differently--for example, by augmenting the sample sizes until at least one of each ethnicity appeared in the dataset--then an approach comparable to (2) would be more appropriate. The key is to reproduce faithfully all details of your actual sampling procedure within the simulation so that you obtain an honest representation of the distribution of your test statistic.
Planning Follow-on Studies
It may be worth remarking that even if you view this range of results as being immaterial--you would make the same decision regardless--the choice of test can nevertheless make a big difference if you plan to conduct additional experiments in the hope of demonstrating an effect. Under that assumption, by using a p-value of $0.086$ (and adopting a significance threshold of $0.05$) you would need a dataset approximately $(Z_{0.05}/Z_{0.086})^2 = 1.45$ times as great as the current one, whereas by using a p-value of $0.140$ you would want to collect $2.32$ times as much data, which will cost $60\%$ more to do.
(The "$Z_{*}$" are quantiles of a standard Normal distribution, invoked here as a rough approximation to a $\chi^2$ power and sample size analysis. The point is not to do an accurate power analysis, but only to observe that it takes relatively few additional data to lower a p-value that is near $0.05$ to below $0.05$ -- assuming the effect is real! -- compared to the amount of data needed to lower a p-value that is far from $0.05$ to below $0.05$.)
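The two sample-size factors can be reproduced in a couple of lines (a Python sketch of the back-of-the-envelope calculation, not a proper power analysis):

```python
from scipy.stats import norm

def z(p):
    """Upper-tail standard-normal quantile Z_p."""
    return norm.ppf(1 - p)

factor_086 = (z(0.05) / z(0.086)) ** 2   # data multiple needed if p = 0.086
factor_140 = (z(0.05) / z(0.140)) ** 2   # data multiple needed if p = 0.140
```

These evaluate to about $1.45$ and $2.32$, matching the figures in the text.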
34,551 | Is the chi-squared test appropriate with many small counts in a 5x2 table? | The 50-year-old rule of thumb is that a Fisher's exact test is more appropriate when the expected counts drop below 5 or 10 (depending on your degrees of freedom). Your data has sample sizes that are too small for a chi-squared test to be accurate (although this is generally considered a pretty conservative rule and people quibble). Fisher conceived of the test as 2 x 2 because anything larger is too hard to do by hand, but you can do it with a Monte Carlo simulation.
Some statistical packages (R, Stata, SPSS) have a ready-made function to do this. Is it important that the work be done in scipy?
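The Monte Carlo idea can also be sketched in plain Python with no package at all (an illustration of the approach, not the poster's code): permute the group labels and compare each permuted table's chi-square statistic with the observed one. The counts below are the 5x2 ethnicity table discussed in this thread.

```python
import random

def chi_sq(table):
    """Pearson chi-square statistic for a contingency table given as a list of rows."""
    row_tot = [sum(r) for r in table]
    col_tot = [sum(c) for c in zip(*table)]
    total = sum(row_tot)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / total
            if exp > 0:
                stat += (obs - exp) ** 2 / exp
    return stat

def monte_carlo_p(table, n_sim=2000, seed=0):
    """Permutation p-value: shuffle group labels, keeping both sets of marginals fixed."""
    rng = random.Random(seed)
    cats, grps = [], []
    for i, row in enumerate(table):
        for j, count in enumerate(row):
            cats += [i] * count
            grps += [j] * count
    observed = chi_sq(table)
    hits = 0
    for _ in range(n_sim):
        rng.shuffle(grps)
        sim = [[0] * len(table[0]) for _ in table]
        for c, g in zip(cats, grps):
            sim[c][g] += 1
        if chi_sq(sim) >= observed - 1e-9:
            hits += 1
    return (hits + 1) / (n_sim + 1)   # add-one correction, usual for permutation tests

counts = [[1, 2], [3, 0], [1, 0], [3, 8], [15, 12]]   # the 5x2 table under discussion
p_value = monte_carlo_p(counts)
```

Note this conditions on both margins, like Fisher's test; the statistic for the observed table reproduces the 6.9206 reported by `chisq.test`.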
A good discussion of contingency tables larger than 2 x 2: Fisher's Exact Test in contingency tables larger than 2x2
Some similar questions:
(In R) Fisher's exact test in 3x2 contingency table
(In Stata) Fisher's exact test or chi-square test
(In SPSS) Fisher's Exact Test in contingency tables larger than 2x2
34,552 | Logistic regression: class probabilities | The predicted values only tell you how likely it is that an observation belongs to the class coded as 1 given its explanatory variables. For classification, you need to find a threshold $t$ which in some sense is optimal for your problem. This is e.g. affected by monetary costs or ethical boundaries.
If you don't have any such costs or boundaries, i.e. no explicit cost function, one criterion could be to minimize the sum of the error frequencies. For this the following two terms are important:
Sensitivity denotes the fraction of positives that were correctly specified for a given $t$.
Specificity denotes the fraction of negatives that were correctly specified for a given $t$.
Denoting sensitivity at threshold $t$ by $s_0(t)$ and specificity by $s_1(t)$, minimizing the sum of the error frequencies is equivalent to maximizing $s_0(t) + s_1(t)$ over all thresholds $t$.
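The criterion is language-agnostic; here is a small self-contained Python sketch of the same threshold sweep (a hypothetical helper for illustration, not part of the pROC workflow below):

```python
def best_threshold(y, p):
    """y: 0/1 labels, p: predicted probabilities; returns (threshold, s0 + s1)."""
    pos = sum(y)
    neg = len(y) - pos
    best_t, best_score = None, -1.0
    for t in sorted(set(p)):
        tp = sum(1 for yi, pi in zip(y, p) if yi == 1 and pi >= t)
        tn = sum(1 for yi, pi in zip(y, p) if yi == 0 and pi < t)
        score = tp / pos + tn / neg   # sensitivity + specificity at threshold t
        if score > best_score:
            best_t, best_score = t, score
    return best_t, best_score

# Made-up labels and predictions, just to exercise the sweep.
y = [0, 0, 0, 1, 0, 1, 1, 1]
p = [0.1, 0.2, 0.3, 0.4, 0.45, 0.6, 0.7, 0.9]
t, score = best_threshold(y, p)
```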
Here, I recommend to use the pROC package in R. It provides a very useful function called roc. See the sample code below. Here, response is your vector of ones and zeros and predictor your predictions. Moreover, the code produces also the corresponding ROC curve and adds a vertical line where the optimal threshold was found.
Please note: You provided very little data so I simulated some myself to get more different Sensitivities and Specificities. You can find the code for the simulated data below the picture.
rm(list = ls()) # clear work space
#install and load package
install.packages("pROC")
library(pROC)
#apply roc function
analysis <- roc(response=p$Real, predictor=p$p)
#Find t that minimizes error
e <- cbind(analysis$thresholds,analysis$sensitivities+analysis$specificities)
opt_t <- subset(e,e[,2]==max(e[,2]))[,1]
#Plot ROC Curve
plot(1-analysis$specificities,analysis$sensitivities,type="l",
ylab="Sensitiviy",xlab="1-Specificity",col="black",lwd=2,
main = "ROC Curve for Simulated Data")
abline(a=0,b=1)
abline(v = opt_t) #add optimal t to ROC curve
opt_t #print t
##Simulate Data
set.seed(123456)
n <- 10000
q <- 0.8
#Simulate response (true 0/1 classes)
Real <- c(sample(c(0,1), n/2, replace = TRUE, prob = c(1-q,q)),
          sample(c(0,1), n/2, replace = TRUE, prob = c(0.7,0.3)))
#Simulate predicted probabilities
p <- c(rep(seq(0.4,0.9, length=100), 50),
       rep(seq(0.2,0.6, length=100), 50))
p <- data.frame(cbind(Real, p))
34,553 | Logistic regression: class probabilities | Put simply/bluntly, a logistic regression model is not a classifier. It is a model for the probability parameter of the binomial distribution. This is why predict() gives probabilities.
In order to make it a classifier you need to specify a function that converts probabilities into classes. Choosing a cutoff is one way - although probably not ideal for a large number of predictions. Taking your example, the "naive" cutoff of 0.5 leads to all observations being classified as $0$. This is despite the model's expected number of $1$ classes being about $2.5$.
A better approach could be taking a randomised decision - generate a Bernoulli random variable with the predicted probability, and set that as your classifier.
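That randomised decision rule is a one-liner; here is a sketch (illustrative code, not the answerer's, with made-up probabilities) showing that the expected number of ones then matches the sum of the predicted probabilities:

```python
import random

def randomised_classify(probs, seed=42):
    """Classify each case by a Bernoulli draw with its predicted probability."""
    rng = random.Random(seed)
    return [1 if rng.random() < p else 0 for p in probs]

probs = [0.1, 0.3, 0.4, 0.5, 0.6, 0.6]   # hypothetical predicted probabilities
labels = randomised_classify(probs)
# A 0.5 cutoff would label almost everything 0, yet the model itself
# expects sum(probs) = 2.5 ones on average; the Bernoulli rule respects that.
```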
34,554 | Logistic regression: class probabilities | For finding which level is 0 and which level is 1, use levels() function on the response column.
You can change the order of levels by using the factor() function:
response <- factor(response, levels = c(level2, level1))
34,555 | Logistic / multinomial regression as two / multiple Poisson regressions? | Assume that you're fitting a multinomial regression model with J categories where the contrast between j and the last category J is modeled as
$$
\log \frac{\mu_{ij}}{\mu_{iJ}} = \alpha_j + X_i \beta_j
$$
where $X_i$ is a vector of covariates associated with the $i$th case. This is rather a hard optimisation problem because the parameters are coupled by the multinomial's conditioning on $N_i$, the $i$-th case's marginal total.
You can instead fit J separate Poisson regressions with linear predictor of the form
$$
\log\, \mu_{ij} = \eta + \theta_i + \alpha_j^* + X_i\beta_j^*
$$
where each case gets one nuisance parameter $\theta$.
Subtracting the expression for $\log\, \mu_{ij}$ from $\log\, \mu_{iJ}$ to construct $\log (\mu_{ij}/\mu_{iJ})$ shows that the parameters are related as $\alpha_j = \alpha^*_j − \alpha^*_J$ and $\beta_j = \beta^*_j - \beta^*_J$ (the nuisance parameters and $\eta$ cancel). Thus you get what you came for: fitting multiple Poisson regression models gives you the same result as fitting a single more complicated multinomial regression model.
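Writing the subtraction step out explicitly (same symbols as above):

```latex
$$
\log \frac{\mu_{ij}}{\mu_{iJ}}
  = \left(\eta + \theta_i + \alpha_j^* + X_i\beta_j^*\right)
  - \left(\eta + \theta_i + \alpha_J^* + X_i\beta_J^*\right)
  = (\alpha_j^* - \alpha_J^*) + X_i(\beta_j^* - \beta_J^*),
$$
```

since $\eta$ and $\theta_i$ appear in both terms and cancel, giving the identifications $\alpha_j = \alpha^*_j - \alpha^*_J$ and $\beta_j = \beta^*_j - \beta^*_J$ directly.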
This exposition is essentially the one in Germán Rodríguez's always excellent notes, in the section on the 'equivalent log-linear model'.
In general this manoeuvre is called the multinomial-Poisson transformation and was written about by Baker in 1994.
Notice also that you don't need to adjust $X$ at all to use it.
34,556 | Gradient descent based minimization algorithm that doesn't require initial guess to be near the global optimum | This is more of a workaround than a direct solution, but a common way to avoid local minima is to run your algorithm several times, from different starting locations. You can then take the best outcome or the average as your final result.
The reason why you might want to take the average rather than the best is to avoid overfitting. Many model types where local minima are a problem have lots of parameters: decision trees, neural networks, etc. Simply taking the best outcome risks obtaining a model that won't generalise well to future data. Taking the average guards against this.
You can get into arbitrarily complex ways of doing the averaging. Have a look at
https://en.wikipedia.org/wiki/Ensemble_learning
https://en.wikipedia.org/wiki/Ensemble_averaging
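As a concrete sketch of the restart idea (illustrative Python with a made-up one-dimensional loss; the links above cover the averaging side):

```python
import random

def gradient_descent(grad, x0, lr=0.01, steps=1000):
    """Plain gradient descent from a single starting point."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

def multi_start(f, grad, n_starts=8, span=2.0, seed=0):
    """Run gradient descent from several random starts; keep the best minimum."""
    rng = random.Random(seed)
    runs = [gradient_descent(grad, rng.uniform(-span, span)) for _ in range(n_starts)]
    return min(runs, key=f)   # or average the runs to guard against overfitting

def f(x):
    """Tilted double well: local minimum near x = 0.96, global minimum near x = -1.04."""
    return (x * x - 1) ** 2 + 0.3 * x

def grad_f(x):
    return 4 * x * (x * x - 1) + 0.3

best = multi_start(f, grad_f)
```

A single run started in the right-hand basin would stop at the local minimum; with several starts at least one lands in the global basin.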
34,557 | Gradient descent based minimization algorithm that doesn't require initial guess to be near the global optimum | This could be achieved by giving the gradient decent "inertia", so as the algorithm alliterative moves down the partial derivatives, it gains momentum, and then once all the partial derivatives equal zero (i.e a minimum), the algorithm continues to move in the direction it was going before, but starts to slow down, as it is now going up the partial derivatives. This would help it escape local minimum and ...
... then head straight back in again, from the other side.
(Here the green is the global min, the red a nearby local min.)
If you have multiple local minima and the global minimum isn't "wide" enough for you to conveniently blunder into from almost anywhere, such simple ideas won't really help much.
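A bare-bones version of that "inertia" idea is classical momentum (a sketch with made-up hyperparameters; whether it actually escapes a given local minimum depends on the landscape and on how much momentum you give it):

```python
def momentum_gd(grad, x0, lr=0.01, beta=0.9, steps=2000):
    """Gradient descent with inertia: the velocity v accumulates past gradients."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v - lr * grad(x)   # decay old velocity, add a new downhill push
        x = x + v                     # keep moving even where the gradient vanishes
    return x

# Sanity check on a convex bowl f(x) = x^2: the iterate settles at the minimum.
x_min = momentum_gd(lambda x: 2 * x, x0=3.0)
```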
34,558 | Gradient descent based minimization algorithm that doesn't require initial guess to be near the global optimum | I think Stochastic Gradient Descent is pretty good at avoiding being stuck in a local minimum. The parameters are updated for every single observation, which gives SGD more randomness, and therefore it is more likely to jump out of a local minimum.
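Those per-observation updates look like this in a toy sketch (illustration only: a one-parameter squared-loss fit, nothing from the original answer):

```python
import random

def sgd(data, w0=0.0, lr=0.05, epochs=200, seed=1):
    """Fit y ~ w*x by stochastic gradient descent: one update per observation."""
    rng = random.Random(seed)
    data = list(data)
    w = w0
    for _ in range(epochs):
        rng.shuffle(data)                    # visit the observations in random order
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x    # gradient of (w*x - y)^2 at one point
    return w

# Noise-free data on the line y = 3x, so the updates all pull w toward 3.
w = sgd([(x, 3.0 * x) for x in (-2.0, -1.0, 0.5, 1.0, 2.0)])
```

Each single-observation gradient is a noisy estimate of the full gradient; that noise is exactly what can bounce the iterate out of a shallow basin.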
34,559 | How to specify a Bayesian binomial model with shrinkage to the population? | Try this: $p_i\stackrel{iid}{\sim} Be(\alpha,\beta)$ and $p(\alpha,\beta)\propto (\alpha+\beta)^{-5/2}$.
I believe the issue you are running into is that $ss\sim Ga(\gamma,\gamma)$ with $\gamma\to 0$ results in the improper $p(ss)\propto 1/ss$ prior. This prior, together with your uniform prior on mu, results in an improper posterior. Despite the fact that you are using a proper prior, it is close enough to this improper prior to cause issues.
You might want to take a look at page 110 of Bayesian Data Analysis (3rd ed) for a discussion of priors for this model as well as the prior suggested above.
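To make the suggested prior concrete, here is one way (a hypothetical Python sketch, not code from the answer) to write down the log-posterior it implies, with the $p_i$ integrated out so each group contributes a standard beta-binomial term:

```python
from math import lgamma, log, isfinite

def lbeta(a, b):
    """log of the Beta function B(a, b)."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def log_posterior(alpha, beta, data):
    """data: list of (successes y, trials n); hyperprior (alpha+beta)^(-5/2)."""
    if alpha <= 0 or beta <= 0:
        return float("-inf")
    lp = -2.5 * log(alpha + beta)            # log of the (alpha+beta)^(-5/2) hyperprior
    for y, n in data:
        # log beta-binomial likelihood for one group, p_i integrated out
        lp += (lgamma(n + 1) - lgamma(y + 1) - lgamma(n - y + 1)
               + lbeta(alpha + y, beta + n - y) - lbeta(alpha, beta))
    return lp

groups = [(3, 10), (5, 10), (1, 8)]          # made-up (successes, trials) data
lp = log_posterior(2.0, 2.0, groups)
```

This is the function you would hand to an MCMC sampler or optimizer on $(\alpha,\beta)$.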
34,560 | How to specify a Bayesian binomial model with shrinkage to the population? | This Blog entry here http://lingpipe-blog.com/2009/09/23/bayesian-estimators-for-the-beta-binomial-model-of-batting-ability/ shows another possibility for modelling the prior on the parameters of the Beta population distribution, namely
$\frac{\alpha}{\alpha + \beta} \sim \text{Uniform}(0,1) = \text{Beta}(1,1)$
$(\alpha + \beta) \sim \text{Pareto}(1.5, 1)$
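In sampling terms (an illustrative sketch of the standard mapping; the variable names are mine, not the blog's), a draw from this prior translates back to $(\alpha, \beta)$ via $\alpha = \text{mean} \times \text{size}$ and $\beta = (1-\text{mean}) \times \text{size}$:

```python
import random

def draw_alpha_beta(rng):
    mean = rng.random()                          # alpha/(alpha+beta) ~ Uniform(0, 1)
    size = (1.0 - rng.random()) ** (-1.0 / 1.5)  # alpha+beta ~ Pareto(1.5, 1), inverse CDF
    return mean * size, (1.0 - mean) * size      # (alpha, beta)

rng = random.Random(7)
draws = [draw_alpha_beta(rng) for _ in range(1000)]
```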
Here is another resource regarding your question (slide 5): http://www.stat.cmu.edu/~brian/724/week06/lec15-mcmc.pdf.
34,561 | What inferential method produces the empirical CDF? | In An Introduction to the Bootstrap, Efron and Tibshirani find it useful to characterize the empirical cumulative distribution function (ecdf) as the nonparametric maximum likelihood estimate of the "underlying population" $F$.
Given data $x_1, x_2, \ldots, x_n$, the likelihood function (by definition) is the product of the probabilities
$$L(F) = \prod_{i=1}^n {\Pr}_F(x_i).$$
E&T claim this is maximized by the ecdf. Since they leave it as an exercise, let's work out the solution here. It's not completely trivial, because we have to account for the possibility of duplicates among the data. Let's take care with the notation, then. Let $x_1, \ldots, x_m$ be the distinct data values, with $x_i$ appearing $k_i \ge 1$ times in the dataset. (Thus, $x_{m+1}, \ldots, x_n$ are all duplicates of the first $m$ values.) The ecdf is the discrete distribution that assigns probability $k_i/n$ to $x_i$ for $1 \le i \le m$.
For any distribution $F$, the likelihood $L(F)$ has $k_i$ terms equal to $p_i = {\Pr}_F(x_i)$ for each $i$. It therefore is completely determined by the vector $p=(p_1, p_2, \ldots, p_m)$ and can be computed as
$$L(F) = L(p) = \prod_{i=1}^m p_i^{k_i}.$$
Since the likelihood for the ecdf is nonzero, the maximum likelihood will be nonzero. Therefore, for any distribution $\hat F$ that maximizes the likelihood, $p_i = {\Pr}_{\hat F}(x_i)$ must be nonzero for all the data. The Axiom of Total Probability asserts the sum of the $p_i$ is at most $1$. This reduces the problem to a constrained optimization:
$$\text{Maximize } L(p) = \prod_{i=1}^m p_i^{k_i}$$
subject to
$$p_i \gt 0, i=1, 2, \ldots m;\quad \sum_{i=1}^m p_i \le 1.$$
This can be solved in many ways. Perhaps the most direct is to use a Lagrange multiplier $\lambda$ to optimize $\log L$, which produces the critical equations
$$\left(\frac{p_1}{k_1}, \frac{p_2}{k_2}, \ldots, \frac{p_m}{k_m}\right) = \lambda\left(1, 1, \ldots, 1\right)$$
with unique solution $$\hat p_i = \frac{k_i}{k_1+\cdots+k_m} = \frac{k_i}{n},$$
precisely the ecdf, QED.
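As a quick numeric sanity check on this claim (a Python sketch; the toy sample is arbitrary): among many randomly generated probability vectors on the same support, none attains a higher multinomial likelihood than the ecdf weights $k_i/n$.

```python
import random
from math import log

random.seed(1)
data = [2, 3, 3, 5, 5, 5, 7]             # toy sample with duplicates
values = sorted(set(data))
counts = [data.count(v) for v in values]  # the k_i
n = len(data)

def log_lik(p):
    # log of prod_i p_i^{k_i}
    return sum(k * log(pi) for k, pi in zip(counts, p))

p_hat = [k / n for k in counts]           # the ecdf's probability masses

def random_p():
    w = [random.random() for _ in values]
    s = sum(w)
    return [x / s for x in w]

best_random = max(log_lik(random_p()) for _ in range(10_000))
print(log_lik(p_hat) >= best_random)      # True: no candidate beats the ecdf
```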
Why is this point of view important? Here are E&T:
As a result, [any] functional statistic $t(\hat F)$ is the nonparametric maximum likelihood estimate of the parameter $t(F)$. In this sense, the nonparametric bootstrap carries out nonparametric maximum likelihood inference.
[Section 21.7, p. 310]
Some words of explanation: "as a result" follows from the (easily proven) fact that the MLE (maximum likelihood estimate) of any function of a parameter is that function of the MLE of the parameter. A "functional statistic" (or "plug-in" statistic) is one that depends only on the distribution function. As an example of this distinction, E&T point out that the usual unbiased variance estimator $s^2 = \sum (x_i-\bar x)^2/(n-1) $ is not a functional statistic because if you were to double all the data, the ecdf would not change, but the $s^2$ would be multiplied by $2(n-1)/(2n-1)$, which does change (albeit only slightly). Functional statistics are crucial to understanding and analyzing the Bootstrap.
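That rescaling factor is easy to verify; a small Python sketch with made-up numbers: duplicating every observation leaves the ecdf unchanged but rescales $s^2$ by exactly $2(n-1)/(2n-1)$.

```python
data = [1.0, 2.0, 4.0, 8.0]     # arbitrary sample, n = 4
n = len(data)

def s2(xs):
    # the usual unbiased variance estimator, with denominator len - 1
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

doubled = data * 2              # every value twice: same ecdf as before
ratio = s2(doubled) / s2(data)
print(ratio, 2 * (n - 1) / (2 * n - 1))  # both 6/7 (up to rounding)
```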
Reference
Bradley Efron and Robert J. Tibshirani, An Introduction to the Bootstrap. Chapman & Hall, 1993.
34,562 | What inferential method produces the empirical CDF?
For a discrete random variable, the standard definition of the empirical cumulative distribution function (cdf) can be seen as a Method-of-Moments estimator. Consider the discrete random variable $X$ taking values $\{k_1 < k_2 < \ldots\}$. Then its cdf is defined as
$$F_X(k_m) =\Pr(X\le k_m)= \sum_{i=1}^m\Pr(X=k_i)$$
We have that $\Pr(X=k_i) = E[I_{\{X=k_i\}}]$, where $I_{\{X=k_i\}}$ is the indicator function taking values $1$ if $X=k_i$, $0$ otherwise. Substituting we have
$$F_X(k_m) = \sum_{i=1}^mE[I_{\{X=k_i\}}]$$
If we have available a sample of size $n$, $\{x_j,\, j=1,...,n\}$, of realizations of $X$, the sample analogue of the RHS is
$$\sum_{i=1}^m\left(\frac 1n\sum_{j=1}^nI_{\{x_j=k_i\}}\right) = \frac 1n\sum_{j=1}^nI_{\{x_j\le k_m\}} = \hat F_X(k_m)$$
i.e. it is the standard expression for the empirical cumulative distribution function. So, since it uses the sample analogue of expected values (which here are moments of the indicator functions, which in turn are Bernoulli r.v.'s), it can be seen as a Method-of-Moments estimator.
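The sample-analogue computation is literally just averaging indicators; a small Python sketch with toy data:

```python
data = [2, 1, 3, 1, 2, 2, 4]    # n = 7 toy observations
n = len(data)

def ecdf(t):
    # sample analogue of E[I{X <= t}]: the average of the indicator over the sample
    return sum(1 for x in data if x <= t) / n

# Summing the per-value indicator means up to k_m reproduces the same step function
values = sorted(set(data))
pmf_hat = {v: sum(1 for x in data if x == v) / n for v in values}
for t in values:
    assert abs(sum(pmf_hat[v] for v in values if v <= t) - ecdf(t)) < 1e-12

print([ecdf(t) for t in values])   # steps of size k_i / n, ending at 1.0
```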
34,563 | Regression model where output is a probability
You can convert this into a logistic regression without having to blow the data set up to the size of the groups; you just have to use weights. For that first group, you would split it into [1,0,1,0,1] (last column is the response) with a weight of [(# in that group) * 0.23], and [1,0,1,0,0] with a weight of [(# in that group) * 0.77]. See the R documentation for GLM with the parameter "weights" for how to execute that in R.
After that, it's a straightforward logistic regression. This is equivalent to the binomial regression that someone else has suggested.
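One way to see the equivalence (a Python sketch; the group size 100 and proportion 0.23 are just illustrative): the weighted Bernoulli log-likelihood differs from the grouped binomial log-likelihood only by a constant in $q$, so both are maximized at the same probability.

```python
from math import comb, log

n_group, p_obs = 100, 0.23
successes = round(n_group * p_obs)   # 23 "yes" decisions in the group

def binom_loglik(q):
    # grouped binomial log-likelihood for the whole group at once
    return (log(comb(n_group, successes))
            + successes * log(q) + (n_group - successes) * log(1 - q))

def weighted_loglik(q):
    # two weighted pseudo-rows: outcome 1 with weight n*p, outcome 0 with weight n*(1-p)
    return n_group * p_obs * log(q) + n_group * (1 - p_obs) * log(1 - q)

# Both are maximized at the same q (here the observed proportion 0.23)
qs = [i / 1000 for i in range(1, 1000)]
print(max(qs, key=binom_loglik), max(qs, key=weighted_loglik))
```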
34,564 | Regression model where output is a probability
I think you should look at a Binomial regression model. Instead of modeling the proportions (percentages) you would model the counts of the decisions (still only one row for each group), i.e. a generalized linear model with a Binomial rather than the usual Bernoulli likelihood. Here is a description of how it can be done in R.
34,565 | Regression model where output is a probability
If I understand correctly, you're really trying to measure event counts among groups of different sizes. You could use a Poisson regression and add an offset for the size of each group. See When to use an offset in a Poisson regression? and poisson vs logistic regression for more explanation.
Not sure if this will be better from a predictive standpoint. I have had good predictive success using linear models or GBMs with ratios as the dependent variable. It would be a problem if you have many ratios close to 0 or 1 as a linear reg will start predicting ratios outside of that range.
After seeing your update, I would disaggregate the data and use the group as a factor variable in a logistic reg. What's wrong with a large dataset?
34,566 | Regression model where output is a probability
You should be able to work with only one row of data for each group:
With these two bits of information, "The data tracks what proportion of people made a decision" and "I also know how big each group is", you can turn the data into counts of successes and failures (or trials): simply multiply the size of the group by the proportion in each group to get the number of successes, and subtract from the group size to get the number of failures. You can then fit a logistic regression. R, SAS, and (I assume) other packages can fit a logistic model with data specified as counts of successes and failures or trials.
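The conversion is one line of arithmetic per group; a Python sketch with made-up group sizes and proportions:

```python
groups = [(120, 0.25), (80, 0.10), (45, 0.40)]   # (size, proportion) per group

rows = []
for size, prop in groups:
    successes = round(size * prop)   # people who made the decision
    failures = size - successes      # the rest of the group
    rows.append((successes, failures))

print(rows)  # [(30, 90), (8, 72), (18, 27)]
```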
For example in R the response variable can be "a two-column matrix with the columns giving the numbers of successes and failures" (quoting the documentation for the glm() function). Alternately, in R you can just fit the model with the response variable as a proportion and use the weights vector to specify the number of trials (group size).
"I am also concerned that treating the input as numbers, when they really represent binary values."
You could tell the software that these are categorical variables (e.g. converting them into factors in R or using a class statement in SAS), but with binary variables this is not strictly necessary (categories get converted into binary dummy variables "under the hood" anyway). It might make the code clearer though.
34,567 | Conditional Distribution of Poisson Variables, given $\sum X_i$
The joint probability mass function of the $X_i$ is
$$p_{\mathbf X}(\mathbf x) = \prod_{i=1}^n e^{-\lambda}\frac{\lambda^{x_i}}{x_i!}
= e^{-n\lambda}\frac{\lambda^{\sum_i x_i}}{x_1!x_2!\cdots x_n!}.$$
$Y = \sum_i X_i$ is a Poisson random variable with parameter $n\lambda$ and so $P\{Y = N\} = e^{-n\lambda}\frac{(n\lambda)^{N}}{N!}$. Now,
$$P\left\{(X_1=x_1, X_2=x_2, \ldots, X_n=x_n)\cap \{Y = N\}\right\}\\[1em]
= \begin{cases}e^{-n\lambda}\frac{\lambda^{\sum_i x_i}}{x_1!x_2!\cdots x_n!},
& \text{if}~\sum_i x_i = N,\\0, & \text{if}~\sum_i x_i \neq N,\end{cases}$$
and so
$$\begin{align}
p_{\mathbf X}(\mathbf x \mid Y=N) &=
\frac{P\{(X_1=x_1, X_2=x_2, \ldots, X_n=x_n)\cap (Y = N)\}}{P\{Y=N\}}\\[1ex]
&= \frac{N!}{n^Nx_1!x_2!\cdots x_n!} \quad\text{if}~ \sum_i x_i = N\\[1ex]
&= \frac{N!}{x_1!x_2!\cdots x_n!}\left(\frac{1}{n}\right)^{x_1}
\left(\frac{1}{n}\right)^{x_2}\cdots\left(\frac{1}{n}\right)^{x_n}
~\text{where}~ \sum_i x_i = N
\end{align}$$
which is a multinomial distribution, $\mathrm{Multinomial}(N;\, 1/n, \ldots, 1/n)$.
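This conditional law is easy to check by simulation; a Python sketch (standard library only, with a small Knuth-style Poisson sampler; the choices $n=3$, $\lambda=2$, $N=4$ and the trial count are arbitrary):

```python
import random
from math import exp, factorial

random.seed(2)

def poisson(lam):
    # Knuth's multiplication method; fine for small lam
    L, k, p = exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

n, lam, N = 3, 2.0, 4
hits, kept = {}, 0
for _ in range(200_000):
    x = tuple(poisson(lam) for _ in range(n))
    if sum(x) == N:          # condition on the sum
        kept += 1
        hits[x] = hits.get(x, 0) + 1

def multinomial_pmf(x):
    # Multinomial(N; 1/n, ..., 1/n) probability of the vector x
    c = factorial(N)
    for xi in x:
        c //= factorial(xi)
    return c * (1 / n) ** N

worst = max(abs(hits[x] / kept - multinomial_pmf(x)) for x in hits)
print(worst)   # small: conditional frequencies match the multinomial pmf
```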
34,568 | Mathematical Explanations behind ANOVA
Here is a little note I wrote for myself when learning ANOVA. Hope it helps clarify things a bit.
ANOVA is the classical method to compare the means of multiple ($\ge 2$) groups. Suppose $N$ observations were sampled from $k$ groups and define $n=N/k$. Let $x_{ij}$ be the $j$th observation from the $i$th group. Here we assume a balanced design, i.e. the number of samples from each group is the same. Denote by $\bar{x}_.$ the grand sample mean and by $\bar{x}_i$ the sample mean of group $i$. Observations can be re-written as
$$x_{ij} = \bar{x}_. + \left(\bar{x}_i - \bar{x}_.\right) + \left(x_{ij}-\bar{x}_i\right)$$
This leads to the following model
$$x_{ij} = \mu + \alpha_i + \epsilon_{ij}$$
where $\mu$ and $\alpha_i$ are grand mean and $i$th group mean respectively. The error term $\epsilon_{ij}$ is assumed to be iid from a normal distribution
$$\epsilon_{ij} \sim N(0,\sigma^2)$$
The null hypothesis in ANOVA is that all group means are the same i.e
$$\alpha_1 = \alpha_2 = \ldots = \alpha_k$$
If this is true, the group means satisfy $\bar{x}_i - \mu \sim N(0,\bar{\sigma}^2)$ with $\bar{\sigma}^2 = \sigma^2/n$. However, you cannot test this directly with a one-sample t-test (discarding the $x_{ij}$ and using only the $\bar{x}_i$). Suppose the observed spread of the group means corresponds to $\bar{\sigma}^2 = 1000$ while $\sigma^2 = 5$, i.e. the between-group differences are much larger than the within-group differences. In this case, data from individual groups are similar but the groups themselves are quite different, so we should reject the null hypothesis although the one-sample t-test may fail to reject. It is really the RELATIVE magnitude of the within- and between-group differences that matters. You cannot say much by looking at only one of them.
Now consider the sum of squares for between group difference
$$\mathrm{SSD_B} = \sum_{i=1}^k \sum_{j=1}^n \left(\bar{x}_i - \bar{x}_.\right)^2 = n\sum_{i=1}^k \left(\bar{x}_i - \bar{x}_.\right)^2$$
and for within group difference
$$\mathrm{SSD_W} = \sum_{i=1}^k\sum_{j=1}^n \left(x_{ij} - \bar{x}_i \right)^2$$
where $\mathrm{SSD_B}$ has $k-1$ degrees of freedom and $\mathrm{SSD_W}$ has $N-k$ degrees of freedom. If there is no systematic difference between the groups, we would expect the mean squares
$$\mathrm{MS_B} = \mathrm{SSD_B}/(k-1)$$
$$\mathrm{MS_W} = \mathrm{SSD_W}/(N-k)$$
would be similar. The test statistic in ANOVA is defined as the ratio of the above two quantities:
$$F = \mathrm{MS_B}/\mathrm{MS_W}$$
which follows an F-distribution with $k-1$ and $N-k$ degrees of freedom. If the null hypothesis is true, $F$ will likely be close to 1. Otherwise, the between-group mean square $\mathrm{MS_B}$ is likely to be large, which results in a large $F$ value. Basically, ANOVA examines the two sources of the total variance and sees which part contributes more. This is why it is called analysis of variance although the intention is to compare group means.
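These formulas are short enough to compute directly; a Python sketch on a made-up balanced design ($k=3$ groups, $n=4$ observations each):

```python
groups = [
    [5.1, 4.8, 5.3, 5.0],
    [5.9, 6.1, 5.8, 6.2],
    [4.2, 4.0, 4.4, 4.1],
]
k, n = len(groups), len(groups[0])
N = k * n

grand = sum(x for g in groups for x in g) / N     # grand sample mean
means = [sum(g) / n for g in groups]              # group sample means

ssd_b = n * sum((m - grand) ** 2 for m in means)  # between-group sum of squares
ssd_w = sum((x - m) ** 2                          # within-group sum of squares
            for g, m in zip(groups, means) for x in g)

F = (ssd_b / (k - 1)) / (ssd_w / (N - k))
print(round(F, 1))   # well above 1: the group means clearly differ
```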
For unbalanced designs and general discussion, you can look at Introducing Anova and Ancova: A GLM Approach by Andrew Rutherford.
34,569 | Mathematical Explanations behind ANOVA
Try: Linear Models by Shayle Searle
34,570 | Mathematical Explanations behind ANOVA
I don't know if it's mathy enough (but it should have the right references to get you started):
Gelman, A. (2005). Analysis of Variance: Why It Is More Important than Ever. The Annals of Statistics, 33(1), 1–31. doi:10.2307/3448650
Should be available from Andrew Gelman's website.
34,571 | Visualizing longitudinal data with binary outcome
There are quite a few ways to work around it.
Jittering the variables mildly to smear the lines apart
First, since both age and the outcome are nicely discrete, we can afford to mildly jitter them in order to show some trends. The trick is to use transparency in the line color so that it's easier to discern the magnitude of overlapping.
library(geepack)
set.seed(6277)
ohio2 <- ohio[2049:2148,]
head(ohio2, n=12)
jitteredResp <- ohio2$resp + rnorm(100, 0, 0.02)  # jitter the 0/1 outcome slightly
jitteredAge <- ohio2$age + 9 + rnorm(100, 0, 0.02)  # recode age to 7-10, then jitter
age <- ohio2$age + 9  # actual age in years (ohio codes age as age - 9)
id <- ohio2$id
wheeze <- ohio2$resp
#### Variation 1 ####
plot(jitteredAge, jitteredResp, type="n", axes=F,
xlab="Age to the nearest year, jittered",
ylab="Wheeze status, jittered")
for (i in id){
par(new=T)
lines(age[id==i], jitteredResp[id==i], col="#FF000008", lwd=2)
}
axis(side=1, at=seq(7,10))
axis(side=2, at=c(0,1), label=c("No", "Yes"))
Getting fancy
It's also possible to use this kind of curve to show the flow of the subjects. It's just a modification of the above chart, but it uses the width of the line to represent frequency rather than overlapping.
Show the fate of each case
This may sound counter-intuitive, but if you lay the cases out in a systematic manner, it works just as well to tell the aggregated story. Here the outcome of each case is shown along a grey reference line. I didn't add a legend there, but one can be added quite easily with the legend command. Blue is "resp = 0" and red is "resp = 1". Time (age) is spread out on the x-axis. Your data are conveniently presorted by outcome pattern, so I didn't have to do anything. If they are not presorted, you'd have to use a command like dcast in the reshape2 package to massage the data a bit.
#### Variation 2 ####
my.col <- vector()
my.col[wheeze ==1] <- "#D7191C"
my.col[wheeze ==0] <- "#2C7BB6"
plot(age, id, type="n", frame=F, xlab="Age, year", ylab="", axes=F, xlim=c(7,10))
abline(h=id, col="#CCCCCC")
axis(side=1, at=seq(7,10))
mtext(side=2, line=1, "Individual cases")
points(age, id, col=my.col, pch=16)
Tabulation
Visualization is not the only way out. Since there would only be, at most, 16 different patterns, you can also tabulate them. Use + and - to create patterns like + + + + and + - - -, and then for each of these patterns, attach the counts and percentage. This can show the information equally effectively.
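A tiny Python sketch of that tabulation (the wheeze sequences below are made up):

```python
# One (age 7, 8, 9, 10) outcome tuple per subject; 1 = wheeze
subjects = [
    (0, 0, 0, 0), (0, 0, 0, 0), (1, 0, 0, 0),
    (1, 1, 0, 0), (0, 0, 0, 0), (1, 1, 1, 1),
]

counts = {}
for pattern in subjects:
    counts[pattern] = counts.get(pattern, 0) + 1

total = len(subjects)
for pattern, c in sorted(counts.items(), reverse=True):
    label = " ".join("+" if v else "-" for v in pattern)
    print(f"{label}  n={c}  ({100 * c / total:.0f}%)")
```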
34,572 | Why can weather prediction be so correct? | Historical weather is able to predict future weather MUCH better than historical financial data can predict future financial data. Technologies of trading/investing change quickly, and the market mechanics of the '80s are very different from current behaviour. Weather has periodicity and pretty smooth, predictable patterns, unlike financial series, where you can observe spikes, lack of mean reversion, etc.
Good quality weather observations can go back to the early 1900s, compared to financial data, which usually span two decades or even less (again, early data wouldn't make any sense anyway). So it has much more training data.
Certainly, weather prediction takes into account not only the time series values (same for financial predictions). But even the most naive approach, "predict December weather as the average of the past 10 years' Decembers", will give a pretty good approximation (a financial prediction like this would be complete nonsense). There are laws of physics which come into play, and they are much stricter than the laws of financial markets - after all, the amount of randomness in cyclones and winds is much less than in market movements.
To improve your model you first need to understand the mechanics of the time series you are trying to predict and base your model on them, not vice versa. Most likely, taking into account only the stock prices, you won't be able to predict anything - try to see other parameters and indicators, maybe looking at volumes and open interest, trying to find some correlations there. It is much harder than it seems, because if it were easy everyone would do it, which would force prices to converge to 'fair values', thus making it useless again. I suggest not making your model more complicated, but rather spending your time researching market mechanics and only then going into building it - you won't get far by just feeding data into some standard predictor and expecting it to make good predictions.
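The "naive" climatological baseline mentioned above is just an average over the same calendar period; a toy sketch (the temperatures are made-up numbers):

```python
# Hypothetical mean temperatures of ten past Decembers (degrees C, made up)
past_decembers = [2.1, -0.5, 1.3, 0.8, -1.2, 0.4, 1.9, 0.0, -0.7, 1.1]

# Climatology baseline: predict the next December as the historical mean
prediction = sum(past_decembers) / len(past_decembers)
print(round(prediction, 2))  # → 0.52
```

An analogous "average of the past 10 years of returns" would tell you almost nothing about next month's stock price, which is exactly the asymmetry this answer points at.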
34,573 | Logistic regression classifier with non-negative weights constraint | I have done this successfully using projected gradient descent.
The algorithm is very simple - take a gradient step, then set all negative coefficients to zero (i.e. project onto the feasible set).
I started with Leon Bottou's code here: http://leon.bottou.org/projects/sgd.
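A self-contained sketch of that projected-gradient loop on toy data (plain Python, not Bottou's SGD code; the data, learning rate and iteration count are arbitrary):

```python
import math
import random

random.seed(0)

# Toy data: class 1 (y=1) has larger feature values on average
X = [[random.gauss(mu, 1), random.gauss(mu, 1)] for mu in [0] * 50 + [2] * 50]
y = [0] * 50 + [1] * 50

w = [0.0, 0.0]
lr = 0.1
for _ in range(500):
    # Gradient of the mean negative log-likelihood of logistic regression
    grad = [0.0, 0.0]
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-(w[0] * xi[0] + w[1] * xi[1])))
        for j in range(2):
            grad[j] += (p - yi) * xi[j] / len(X)
    # Gradient step, then projection onto the non-negative orthant
    w = [max(0.0, wj - lr * gj) for wj, gj in zip(w, grad)]

print(w)  # both coefficients are >= 0 by construction
```

The projection step is just max(0, ·), which is what makes the method so easy to bolt onto an existing (stochastic) gradient implementation.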
34,574 | Logistic regression classifier with non-negative weights constraint | If you are familiar with convex optimization, I suspect this can be formalized as a quadratic programming problem (or some other convex problem) and then solved with a QP solver. If this direction interests you, I can elaborate further.
Regardless of the method used, if the problem is convex (there are a number of ways to check this, and I know that unconstrained logistic regression is convex), you are guaranteed that a local minimum will also be a global minimum.
I might be mistaken, but it seems to be convex: the original solution space is convex, and the set of non-negative solutions is also convex (it is a cone), so the new feasible set, being an intersection of convex sets, is convex as well. But this is hand-waving; it is best to examine it formally.
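For a concrete (if informal) check of this route, one can hand the constrained problem to an off-the-shelf solver; a sketch using SciPy's L-BFGS-B with non-negativity box bounds on toy data (this assumes SciPy is available, and it is a bound-constrained smooth solver rather than a formal QP solver):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy data: class 1 has larger feature values on average (no intercept)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def nll(w):
    # Negative log-likelihood: sum of log(1 + exp(z)) - y*z, written stably
    z = X @ w
    return np.sum(np.logaddexp(0.0, z) - y * z)

def grad(w):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X.T @ (p - y)

# Non-negativity as simple box bounds; because the problem is convex,
# the local optimum found here is also the global one.
res = minimize(nll, x0=np.zeros(2), jac=grad, method="L-BFGS-B",
               bounds=[(0, None), (0, None)])
print(res.x)
```

In practice one would prefer a purpose-built routine (glmnet, projected gradient, ...), but this makes the "convex objective + convex feasible set" framing concrete.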
34,575 | Logistic regression classifier with non-negative weights constraint | Your objective can be achieved using the R package glmnet. It is a machine learning package in R that fits generalized linear models via penalized maximum likelihood, but it also allows coefficients to be constrained. This can be used to solve your problem. For more details, please follow the link below:
https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html
Coming back to your problem: glmnet works only on matrices, so please be sure to get all your data into matrix form. Create two separate matrices, one for the predictors (X) and another for the response (Y).
Run the following code:
library(glmnet)
loReg <- glmnet(x=X, y=Y, family = "binomial", lower.limits = 0, lambda = 0, standardize=TRUE)
The above line will create a logistic model with the penalty coefficient (lambda) set to zero (which is what you want). Since the lower limit for all of your variables is the same (i.e. zero), setting lower.limits=0 will do the job.
To predict new observations: suppose you want to predict m new observations. Get these observations into an m x p matrix, where p is the number of predictors. Let this matrix of new observations be newX.
predict(loReg, newx = newX, type="response")
Hope this helps.
34,576 | Logistic regression classifier with non-negative weights constraint | I have answered that question in this blog post. It is about adding some dummy data, but the real solution is of course to do Projected Gradient Descent in the primal, as user1149913 said.
| Logistic regression classifier with non-negative weights constraint
34,577 | Logistic regression classifier with non-negative weights constraint | I wanted to mention a few more solutions that are quite straightforward and have not been mentioned yet.
Bayesian logistic regression
We can use Bayesian logistic regression and set prior distributions that put no prior weight on negative coefficient values. Of course, your exact choice of prior distribution may have an influence on the inference, but since you seem to have some prior information that may be a good thing. A plus of this approach is that you get credible intervals for parameters.
For example, if you use R and the brms package, something like this should work in theory, but runs into some challenges:
library(tidyverse)
library(brms)
set.seed(1234)
mydata = tibble(y=rep(c(0,1), each=25),
x1=rnorm(n=50, mean=y, sd=1),
x2=rnorm(n=50, mean=y, sd=1))
fit1 = brm(data=mydata,
formula=y~0+x1+x2,
family = bernoulli(link = "logit"),
prior=set_prior(class="b", prior="uniform(0,10)"),
control=list(adapt_delta=0.999))
We can of course debate whether a uniform(0,10) prior is sensible, but a log-odds ratio of 10 is huge, so I thought I'd cut off the distribution there. However, we run into sampling problems, because we did not tell the sampler that the parameter is always >=0. With e.g. an exponential(0.01) prior on the coefficients, that issue still occurs. There are two solutions to this (and possibly a third if you can tell brms about the constraint):
Firstly, one of the cool things about MCMC sampling is that you can introduce constraints retrospectively, e.g. by sampling without a constraint and then subsetting the samples:
fit2 = brm(data=mydata,
formula=y~x1+x2,
family = bernoulli(link = "logit"),
prior=c(prior(class="b", prior="normal(0,10)"),
prior(class="Intercept", prior="normal(0,10)")),
control=list(adapt_delta=0.999))
library(tidybayes) # Only needed for the median_qi function
as.data.frame(fit2) %>%
filter(b_x1>=0 & b_x2>=0) %>%
dplyr::select(-lp__) %>%
median_qi()
This gets us:
b_Intercept b_Intercept.lower b_Intercept.upper b_x1 b_x1.lower b_x1.upper b_x2 b_x2.lower b_x2.upper .width .point .interval
1 -0.7483035 -1.638089 0.03658298 1.136726 0.2859683 2.247933 1.011623 0.354213 1.851013 0.95 median qi
The main limitation of this approach is that when you have lots of coefficients (curse of dimensionality), or when the data somewhat favor a negative value for some coefficient, you may end up with few (or no) samples in which all coefficients are positive.
The more general solution is to tell the sampler about the constraint (which is easy enough to do in rstan by coding a Stan model ourselves):
library(rstan)
scode = "
data {
int n;
int<lower=0, upper=1> y[n];
real x1[n];
real x2[n];
}
parameters {
real<lower=0> beta[2];
}
model {
beta ~ uniform(0,10);
for (record in 1:n){
y[record] ~ binomial_logit(1, beta[1]*x1[record] + beta[2]*x2[record]);
}
}
"
smodel = stan_model(model_code = scode)
sfit = sampling(object = smodel,
data=list(n=dim(mydata)[1], y=mydata$y, x1=mydata$x1, x2=mydata$x2))
summary(sfit)$summary
Note that this is rather inefficiently coded (the loop over records and not using a matrix of covariates so that we can just do matrix multiplication to get the "xbeta"s).
mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
beta[1] 1.166326 0.012648042 0.5134742 0.215388737 0.80786712 1.12813511 1.5047327 2.2471028 1648.1265 1.001271
beta[2] 0.122852 0.002284301 0.1072845 0.004358814 0.04249053 0.09305318 0.1763618 0.4019377 2205.8067 1.002139
lp__ -33.951458 0.041777118 1.1712864 -37.180331160 -34.42739042 -33.57252903 -33.1131970 -32.8095330 786.0483 1.004368
General minimization tools such as PyTorch
If you don't mind not having credible intervals/standard errors, then using any generic minimization tool and defining a suitable averaging function works. E.g. you can use a tool like PyTorch like this, if you want no intercept term and not just positivity, but also a sum-to-zero constraint:
import torch
import torch.nn as nn

class MyAverager(nn.Module):
def __init__(self, n_models, n_targets):
super(MyAverager, self).__init__()
self.betas = nn.Parameter(torch.randn(size=(n_models, n_targets)))
self.softmax = nn.Softmax(dim=0)
def forward(self, inputs):
# Assume input tensors indexed as sample, model, target
# self.betas for weights indexed as model, target
wgts = self.softmax(self.betas)
x = torch.mul(inputs, wgts).sum(dim=1)
return x
And like this if you only want the positive coefficients (but no sum-to-zero constraint) and no intercept:
class MyAverager(nn.Module):
def __init__(self, n_models, n_targets):
super(MyAverager, self).__init__()
self.betas = nn.Parameter(torch.randn(size=(n_models, n_targets)))
def forward(self, inputs):
# Assume input tensors indexed as sample, model, target
# self.betas for weights indexed as model, target
wgts = torch.exp(self.betas)
x = torch.mul(inputs, wgts).sum(dim=1)
return x
Obviously, there's nothing unique about PyTorch that enables this, and you can just as easily do this with Keras / TensorFlow, or, if you are not a Python user, with the corresponding R (or whatever else you are using) packages.
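Stripped of the framework, both parameterizations above are just maps from unconstrained parameters into the feasible set; a tiny framework-free illustration (arbitrary parameter values):

```python
import math

# Unconstrained parameters for three models being averaged (arbitrary)
betas = [0.3, -1.2, 0.7]

# Softmax map: weights are positive AND sum to one (the constrained average)
exps = [math.exp(b) for b in betas]
softmax_w = [e / sum(exps) for e in exps]

# Plain exponential map: weights are positive but unconstrained in total mass
exp_w = list(exps)

print(softmax_w, exp_w)
```

Because the optimizer moves the unconstrained betas, no projection step is needed; the constraint holds for every parameter setting by construction.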
34,578 | Logistic regression classifier with non-negative weights constraint | For Python users: LogisticRegressionBinaryClassifier() in nimbusml has an option enforce_non_negativity that imposes a non-negativity constraint. See here.
34,579 | Is a heat-map of gene expression more informative if Z-scores are used instead of actual expression measurement values? | What the reviewer may be referring to is the bottom legend of your figure. It goes from 1 to 12, with 4 right in the middle, which is discomforting. This makes your absolute log expression values difficult to interpret, because when a gene goes from bright green to black, its expression level is multiplied by 16, but when it goes from black to bright red, it is multiplied by 256. In short, I don't think your figure could be "more informative", but the information could be more intuitive.
As explained by @fosgen, Z-scores are centered and normalized, so the user can interpret a color as $x$ standard deviations from the mean and have an intuitive idea of the relative variation of that value.
Like @fosgen, I think you should go for standardization by gene (standardization by cell type does not make sense to me in that context). Black will be the average expression across different cell types (set to 0) and the color distribution will be symmetrical on both sides.
Showing the (relative) gene-wise variation of expression is standard in the field, but you might have specific reasons to show the (absolute) log2-microarray measurements, in which case you can expose them to the reviewers. But I would still straighten the color gradient to ease interpretation.
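Gene-wise Z-scoring (each row forced to mean 0 and standard deviation 1) is a one-liner in most environments; a small illustration with a made-up matrix:

```python
# Made-up expression matrix: rows = genes, columns = cell types
expr = [
    [4.0, 6.0, 8.0],
    [1.0, 2.0, 3.0],
]

def zscore_rows(matrix):
    # Center and scale each row (gene) to mean 0 and standard deviation 1
    out = []
    for row in matrix:
        mean = sum(row) / len(row)
        sd = (sum((x - mean) ** 2 for x in row) / len(row)) ** 0.5
        out.append([(x - mean) / sd for x in row])
    return out

z = zscore_rows(expr)
print(z)
```

Note that the two made-up genes become identical after scaling: Z-scores keep the relative pattern across cell types but deliberately discard the absolute expression level, which is exactly the trade-off under discussion.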
34,580 | Is a heat-map of gene expression more informative if Z-scores are used instead of actual expression measurement values? | The answer depends on what kind of comparison has to be shown in the figure. If we want to show differences between genes, it is good to take Z-scores by samples (force each sample to have zero mean and standard deviation 1). If we want to show differences between samples, it is good to take Z-scores by genes (force each gene to have zero mean and standard deviation 1). The original heat-map contains both kinds of information, so the claim that it is "less informative" does not quite fit here. But redundant information makes useful information hard to see. Z-scoring does not reduce dimensionality, but it throws away information about the means and standard deviations of the rows or columns (genes or samples). Think about what information and what comparison you are discussing in the paper, and apply the appropriate Z-scoring if some of the information is redundant; if all of it is useful, leave the original heat-map and explain this point to your reviewer.
34,581 | Estimating the spectral density | It is literally mostly implemented via the FFT, but it is usually derived with the aid of the DFT. The idea is that much of the statistical properties of the periodogram can be obtained in this way (e.g., you would like not only to know where the peak is, but also the significance of that peak!). Whether or not it is directly the FFT depends on the algorithm; the "classical" periodogram is "just" the FFT, but it is not only noisy, it also has other serious problems (see the paper from Scargle that I cite below).
The most used form of the periodogram is the Lomb-Scargle (LS) Periodogram (Scargle, 1982; in that paper Scargle in fact critiques the "classical" periodogram; it is a MUST READ if you are trying to detect the period of a time series!). You can look at a fast implementation of it in the paper of Press & Rybicki (1989) (if you get lost, you can also look at the explicit implementation in the Numerical Recipes).
I should also add that there are generalizations of the LS Periodogram that account for floating means of the sinusoid, or for unequal variances among the points. The implementation is straightforward once you read the papers that I just gave ;-). | Estimating the spectral density | It is literally mostly implemented via the FFT, but it is usually derived with the aid of the DFT. The idea is that much of the statistical properties of the periodogram can be obtained in this way (e | Estimating the spectral density
It is mostly implemented via the FFT, but it is usually derived with the aid of the DFT. The idea is that many of the statistical properties of the periodogram can be obtained this way (e.g., you would like not only to know where the peak is but also the significance of that peak!). Whether or not it is directly the FFT depends on the algorithm; the "classical" periodogram is "just" the FFT, but it is not only noisy, it also has other serious problems (see the paper by Scargle that I cite below).
The most used form of the periodogram is the Lomb-Scargle (LS) Periodogram (Scargle, 1982; in that paper Scargle in fact critiques the "classical" periodogram; it is a MUST READ if you are trying to detect the period of a time series!). You can look at a fast implementation of it in the paper of Press & Rybicki (1989) (if you get lost, you can also look at the explicit implementation in the Numerical Recipes).
I should also add that there are generalizations of the LS Periodogram that account for floating means of the sinusoid, or for unequal variances among the points. The implementation is straightforward once you read the papers that I just gave ;-). | Estimating the spectral density
It is literally mostly implemented via the FFT, but it is usually derived with the aid of the DFT. The idea is that much of the statistical properties of the periodogram can be obtained in this way (e |
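As an illustration of the LS periodogram in practice, here is a sketch using SciPy's `lombscargle` (the library choice and the simulated unevenly sampled signal are my assumptions, not the answer's):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 100, 400))          # uneven sampling times
true_freq = 0.8                                 # angular frequency (rad/s)
y = np.sin(true_freq * t) + 0.3 * rng.normal(size=t.size)

freqs = np.linspace(0.01, 3, 3000)              # angular frequencies to scan
power = lombscargle(t, y - y.mean(), freqs)     # LS periodogram

print(freqs[np.argmax(power)])                  # peak near 0.8
```

Note that `lombscargle` works in angular frequency, and subtracting the mean beforehand is a crude stand-in for the floating-mean generalizations mentioned above.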
34,582 | Estimating the spectral density | The spectral density is defined for stationary time series. It is the Fourier transform of the autocorrelation function. The periodogram is the Fourier transform of the sample autocorrelation function. The periodogram is discrete while the spectral density is continuous and is usually estimated by a kernel smoothing of the periodogram. The Fast Fourier Transform is a quick way to compute a Fourier transform due to the work of Tukey and Cooley. | Estimating the spectral density | The spectral density is defined for stationary time series. It is the Fourier transform of the autocorrelation function. The periodogram is the Fourier transform of the sample autocorrelation functio | Estimating the spectral density
The spectral density is defined for stationary time series. It is the Fourier transform of the autocorrelation function. The periodogram is the Fourier transform of the sample autocorrelation function. The periodogram is discrete while the spectral density is continuous and is usually estimated by a kernel smoothing of the periodogram. The Fast Fourier Transform is a quick way to compute a Fourier transform due to the work of Tukey and Cooley. | Estimating the spectral density
The spectral density is defined for stationary time series. It is the Fourier transform of the autocorrelation function. The periodogram is the Fourier transform of the sample autocorrelation functio |
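The chain periodogram → FFT → smoothed density estimate can be sketched numerically (my example; the moving-average smoothing kernel is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n, fs = 1024, 1.0                         # sample count and sampling rate
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.normal(size=n)

# raw periodogram: squared magnitude of the FFT, scaled by n
freqs = np.fft.rfftfreq(n, d=1 / fs)
pgram = np.abs(np.fft.rfft(x - x.mean())) ** 2 / n

# a crude kernel-smoothed estimate of the spectral density
kernel = np.ones(5) / 5
smoothed = np.convolve(pgram, kernel, mode="same")

print(freqs[np.argmax(pgram)])            # near the 0.1 Hz component
```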
34,583 | How do I complete the square with normal likelihood and normal prior? | I'll start from scratch, since the original post has some math typos like wrong signs, dropping the $V$ matrix, etc.
You've specified prior $p(\beta)=\mathcal{N}( 0, \sigma^2 V )$ and likelihood: $p(y | \beta ) = \mathcal{N}( B\beta, \sigma^2I )$.
We can write each of these purely as expressions of terms inside the $\exp$ that depend on $\beta$, grouping all terms unrelated to $\beta$ into a single constant:
$\log p( \beta ) + \mbox{const} = -\frac{1}{2\sigma^2} \beta^T V^{-1} \beta$
$\log p( y | \beta ) + \mbox{const} = -\frac{1}{2\sigma^2}( \beta^T B^TB \beta - 2y^T B \beta ) \quad$ (note that $y^TB\beta = \beta^T B^T y$ always)
Adding these in log space and collecting like terms yields the unnormalized log posterior
$\log p( \beta | y ) + \mbox{const} = -\frac{1}{2\sigma^2}( \beta^T(V^{-1} + B^TB)\beta - 2y^T B \beta )\quad$ (1)
... here, we've used the standard identity $x^TAx + x^TCx = x^T(A+C)x$ for any vectors $x$ and matrices $A,C$ of appropriate size.
OK, our goal is now to "complete" the square. We'd like an expression of the form below, which would indicate that the posterior for $\beta$ is Gaussian.
$\log p( \beta | y ) + \mbox{const} = -\frac{1}{2\sigma^2}(\beta - \mu_p)^T \Lambda_p (\beta - \mu_p )
= -\frac{1}{2\sigma^2}\left( \beta^T \Lambda_p \beta - 2\mu_p^T \Lambda_p \beta + \mu_p^T \Lambda_p \mu_p \right)$
where the parameters $\mu_p$ and $\Lambda_p$ define the posterior mean and (up to the $1/\sigma^2$ factor) the posterior inverse covariance matrix, respectively.
Well, by inspection eqn. (1) looks a lot like this form if we set
$\Lambda_p = V^{-1} + B^TB \quad$ and
$\quad \mu_p = \Lambda_p^{-1}B^Ty$
In detail, we can show that this substitution creates each necessary term from (1):
quadratic term: $\beta^T \Lambda_p \beta = \beta^T( V^{-1} + B^TB)\beta$
linear term: $\mu_p^T \Lambda_p \beta = ( \Lambda_p^{-1}B^Ty )^T \Lambda_p \beta = y^T B \Lambda_p^{-1} \Lambda_p \beta = y^T B \beta$
.... here we used facts $(AB)^T = B^T A^T$ and $(\Lambda_p^{-1})^T =\Lambda_p^{-1}$ due to symmetry ($\Lambda_p$ is symmetric, then so is its inverse).
However, this leaves us with a pesky extra term $\mu_p^T \Lambda_p \mu_p$. To avoid this, we just subtract this term from our final result. Thus, we can directly substitute our $\mu_p, \Lambda_p$ parameters into (1) to get
$\log p( \beta | y ) + \mbox{const} = -\frac{1}{2\sigma^2}[ (\beta-\mu_p)^T\Lambda_p(\beta-\mu_p) - \mu_p^T\Lambda_p\mu_p ]$
since that last term is constant with respect to $\beta$, we can just smash it into the big normalization constant on the left hand side and we've achieved our goal. | How do I complete the square with normal likelihood and normal prior? | I'll start from scratch, since the original post has some math typos like wrong signs, dropping the $V$ matrix, etc.
You've specified prior $p(\beta)=\mathcal{N}( 0, \sigma^2 V )$ and likelihood: $p(y | How do I complete the square with normal likelihood and normal prior?
I'll start from scratch, since the original post has some math typos like wrong signs, dropping the $V$ matrix, etc.
You've specified prior $p(\beta)=\mathcal{N}( 0, \sigma^2 V )$ and likelihood: $p(y | \beta ) = \mathcal{N}( B\beta, \sigma^2I )$.
We can write each of these purely as expressions of terms inside the $\exp$ that depend on $\beta$, grouping all terms unrelated to $\beta$ into a single constant:
$\log p( \beta ) + \mbox{const} = -\frac{1}{2\sigma^2} \beta^T V^{-1} \beta$
$\log p( y | \beta ) + \mbox{const} = -\frac{1}{2\sigma^2}( \beta^T B^TB \beta - 2y^T B \beta ) \quad$ (note that $y^TB\beta = \beta^T B^T y$ always)
Adding these in log space and collecting like terms yields the unnormalized log posterior
$\log p( \beta | y ) + \mbox{const} = -\frac{1}{2\sigma^2}( \beta^T(V^{-1} + B^TB)\beta - 2y^T B \beta )\quad$ (1)
... here, we've used the standard identity $x^TAx + x^TCx = x^T(A+C)x$ for any vectors $x$ and matrices $A,C$ of appropriate size.
OK, our goal is now to "complete" the square. We'd like an expression of the form below, which would indicate that the posterior for $\beta$ is Gaussian.
$\log p( \beta | y ) + \mbox{const} = -\frac{1}{2\sigma^2}(\beta - \mu_p)^T \Lambda_p (\beta - \mu_p )
= -\frac{1}{2\sigma^2}\left( \beta^T \Lambda_p \beta - 2\mu_p^T \Lambda_p \beta + \mu_p^T \Lambda_p \mu_p \right)$
where the parameters $\mu_p$ and $\Lambda_p$ define the posterior mean and (up to the $1/\sigma^2$ factor) the posterior inverse covariance matrix, respectively.
Well, by inspection eqn. (1) looks a lot like this form if we set
$\Lambda_p = V^{-1} + B^TB \quad$ and
$\quad \mu_p = \Lambda_p^{-1}B^Ty$
In detail, we can show that this substitution creates each necessary term from (1):
quadratic term: $\beta^T \Lambda_p \beta = \beta^T( V^{-1} + B^TB)\beta$
linear term: $\mu_p^T \Lambda_p \beta = ( \Lambda_p^{-1}B^Ty )^T \Lambda_p \beta = y^T B \Lambda_p^{-1} \Lambda_p \beta = y^T B \beta$
.... here we used facts $(AB)^T = B^T A^T$ and $(\Lambda_p^{-1})^T =\Lambda_p^{-1}$ due to symmetry ($\Lambda_p$ is symmetric, then so is its inverse).
However, this leaves us with a pesky extra term $\mu_p^T \Lambda_p \mu_p$. To avoid this, we just subtract this term from our final result. Thus, we can directly substitute our $\mu_p, \Lambda_p$ parameters into (1) to get
$\log p( \beta | y ) + \mbox{const} = -\frac{1}{2\sigma^2}[ (\beta-\mu_p)^T\Lambda_p(\beta-\mu_p) - \mu_p^T\Lambda_p\mu_p ]$
since that last term is constant with respect to $\beta$, we can just smash it into the big normalization constant on the left hand side and we've achieved our goal. | How do I complete the square with normal likelihood and normal prior?
I'll start from scratch, since the original post has some math typos like wrong signs, dropping the $V$ matrix, etc.
You've specified prior $p(\beta)=\mathcal{N}( 0, \sigma^2 V )$ and likelihood: $p(y |
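As a quick numerical sanity check of the derivation (my own sketch, not part of the original answer): at $\mu_p = \Lambda_p^{-1}B^Ty$ the gradient of the quadratic in (1) must vanish, i.e. $\Lambda_p\mu_p = B^Ty$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
B = rng.normal(size=(n, d))
V = 2.0 * np.eye(d)                      # an assumed prior covariance factor
y = B @ rng.normal(size=d) + rng.normal(size=n)

Lambda_p = np.linalg.inv(V) + B.T @ B    # posterior precision (up to 1/sigma^2)
mu_p = np.linalg.solve(Lambda_p, B.T @ y)

# gradient of -(1/2)(beta^T Lambda_p beta - 2 y^T B beta) vanishes at mu_p
print(np.allclose(Lambda_p @ mu_p, B.T @ y))   # True
```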
34,584 | Non-Correlated errors from Generalized Least Square model (GLS) | The residuals from gls will indeed have the same autocorrelation structure, but that does not mean the coefficient estimates and their standard errors have not been adjusted appropriately. (There's obviously no requirement that $\Omega$ be diagonal, either.) This is because the residuals are defined as $e = Y - X\hat{\beta}^{\text{GLS}}$. If the covariance matrix of $e$ was equal to $\sigma^2\text{I}$, there would be no need to use GLS!
In short, you haven't done anything wrong, there's no need to adjust the residuals, and the routines are all working correctly.
predict.gls does take the structure of the covariance matrix into account when forming standard errors of the prediction vector. However, it doesn't have the convenient "predict a few observations ahead" feature of predict.Arima, which takes into account the relevant residuals at the end of the data series and the structure of the residuals when generating predicted values. arima has the ability to incorporate a matrix of predictors in the estimation, and if you're interested in prediction a few steps ahead, it may be a better choice.
EDIT: Prompted by a comment from Michael Chernick (+1), I'm adding an example comparing GLS with ARMAX (arima) results, showing that coefficient estimates, log likelihoods, etc. all come out the same, at least to four decimal places (a reasonable degree of accuracy given that two different algorithms are used):
# Generating data
eta <- rnorm(5000)
for (j in 2:5000) eta[j] <- eta[j] + 0.4*eta[j-1]
e <- eta[4001:5000]
x <- rnorm(1000)
y <- x + e
> summary(gls(y~x, correlation=corARMA(p=1), method='ML'))
Generalized least squares fit by maximum likelihood
Model: y ~ x
Data: NULL
AIC BIC logLik
2833.377 2853.008 -1412.688
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.4229375
Coefficients:
Value Std.Error t-value p-value
(Intercept) -0.0375764 0.05448021 -0.68973 0.4905
x 0.9730496 0.03011741 32.30854 0.0000
Correlation:
(Intr)
x -0.022
Standardized residuals:
Min Q1 Med Q3 Max
-2.97562731 -0.65969048 0.01350339 0.70718362 3.32913451
Residual standard error: 1.096575
Degrees of freedom: 1000 total; 998 residual
>
> arima(y, order=c(1,0,0), xreg=x)
Call:
arima(x = y, order = c(1, 0, 0), xreg = x)
Coefficients:
ar1 intercept x
0.4229 -0.0376 0.9730
s.e. 0.0287 0.0544 0.0301
sigma^2 estimated as 0.9874: log likelihood = -1412.69, aic = 2833.38
EDIT: Prompted by a comment from anand (OP), here's a comparison of predictions from gls and arima with the same basic data structure as above and some extraneous output lines removed:
df.est <- data.frame(list(y = y[1:995], x=x[1:995]))
df.pred <- data.frame(list(y=NA, x=x[996:1000]))
model.gls <- gls(y~x, correlation=corARMA(p=1), method='ML', data=df.est)
model.armax <- arima(df.est$y, order=c(1,0,0), xreg=df.est$x)
> predict(model.gls, newdata=df.pred)
[1] -0.3451556 -1.5085599 0.8999332 0.1125310 1.0966663
> predict(model.armax, n.ahead=5, newxreg=df.pred$x)$pred
[1] -0.79666213 -1.70825775 0.81159072 0.07344052 1.07935410
As we can see, the predicted values are different, although they are converging as we move farther into the future. This is because gls doesn't treat the data as a time series and take the specific value of the residual at observation 995 into account when forming predictions, but arima does. The effect of the residual at obs. 995 decreases as the forecast horizon increases, leading to the convergence of predicted values.
Consequently, for short-term predictions of time series data, arima will be better. | Non-Correlated errors from Generalized Least Square model (GLS) | The residuals from gls will indeed have the same autocorrelation structure, but that does not mean the coefficient estimates and their standard errors have not been adjusted appropriately. (There's o | Non-Correlated errors from Generalized Least Square model (GLS)
The residuals from gls will indeed have the same autocorrelation structure, but that does not mean the coefficient estimates and their standard errors have not been adjusted appropriately. (There's obviously no requirement that $\Omega$ be diagonal, either.) This is because the residuals are defined as $e = Y - X\hat{\beta}^{\text{GLS}}$. If the covariance matrix of $e$ was equal to $\sigma^2\text{I}$, there would be no need to use GLS!
In short, you haven't done anything wrong, there's no need to adjust the residuals, and the routines are all working correctly.
predict.gls does take the structure of the covariance matrix into account when forming standard errors of the prediction vector. However, it doesn't have the convenient "predict a few observations ahead" feature of predict.Arima, which takes into account the relevant residuals at the end of the data series and the structure of the residuals when generating predicted values. arima has the ability to incorporate a matrix of predictors in the estimation, and if you're interested in prediction a few steps ahead, it may be a better choice.
EDIT: Prompted by a comment from Michael Chernick (+1), I'm adding an example comparing GLS with ARMAX (arima) results, showing that coefficient estimates, log likelihoods, etc. all come out the same, at least to four decimal places (a reasonable degree of accuracy given that two different algorithms are used):
# Generating data
eta <- rnorm(5000)
for (j in 2:5000) eta[j] <- eta[j] + 0.4*eta[j-1]
e <- eta[4001:5000]
x <- rnorm(1000)
y <- x + e
> summary(gls(y~x, correlation=corARMA(p=1), method='ML'))
Generalized least squares fit by maximum likelihood
Model: y ~ x
Data: NULL
AIC BIC logLik
2833.377 2853.008 -1412.688
Correlation Structure: AR(1)
Formula: ~1
Parameter estimate(s):
Phi
0.4229375
Coefficients:
Value Std.Error t-value p-value
(Intercept) -0.0375764 0.05448021 -0.68973 0.4905
x 0.9730496 0.03011741 32.30854 0.0000
Correlation:
(Intr)
x -0.022
Standardized residuals:
Min Q1 Med Q3 Max
-2.97562731 -0.65969048 0.01350339 0.70718362 3.32913451
Residual standard error: 1.096575
Degrees of freedom: 1000 total; 998 residual
>
> arima(y, order=c(1,0,0), xreg=x)
Call:
arima(x = y, order = c(1, 0, 0), xreg = x)
Coefficients:
ar1 intercept x
0.4229 -0.0376 0.9730
s.e. 0.0287 0.0544 0.0301
sigma^2 estimated as 0.9874: log likelihood = -1412.69, aic = 2833.38
EDIT: Prompted by a comment from anand (OP), here's a comparison of predictions from gls and arima with the same basic data structure as above and some extraneous output lines removed:
df.est <- data.frame(list(y = y[1:995], x=x[1:995]))
df.pred <- data.frame(list(y=NA, x=x[996:1000]))
model.gls <- gls(y~x, correlation=corARMA(p=1), method='ML', data=df.est)
model.armax <- arima(df.est$y, order=c(1,0,0), xreg=df.est$x)
> predict(model.gls, newdata=df.pred)
[1] -0.3451556 -1.5085599 0.8999332 0.1125310 1.0966663
> predict(model.armax, n.ahead=5, newxreg=df.pred$x)$pred
[1] -0.79666213 -1.70825775 0.81159072 0.07344052 1.07935410
As we can see, the predicted values are different, although they are converging as we move farther into the future. This is because gls doesn't treat the data as a time series and take the specific value of the residual at observation 995 into account when forming predictions, but arima does. The effect of the residual at obs. 995 decreases as the forecast horizon increases, leading to the convergence of predicted values.
Consequently, for short-term predictions of time series data, arima will be better. | Non-Correlated errors from Generalized Least Square model (GLS)
The residuals from gls will indeed have the same autocorrelation structure, but that does not mean the coefficient estimates and their standard errors have not been adjusted appropriately. (There's o |
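The central point, that correctly computed GLS residuals still carry the error autocorrelation, can be seen with textbook GLS computed by hand (my NumPy illustration, assuming a known AR(1) covariance; not the R code above):

```python
import numpy as np

rng = np.random.default_rng(1)
n, rho = 1000, 0.4

# AR(1) errors: e_t = rho * e_{t-1} + eps_t
eps = rng.normal(size=n)
e = np.empty(n)
e[0] = eps[0]
for t in range(1, n):
    e[t] = rho * e[t - 1] + eps[t]

x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = X @ np.array([0.0, 1.0]) + e          # true intercept 0, slope 1

# GLS with covariance proportional to Omega_ij = rho^|i-j|
# (the proportionality constant cancels in the GLS estimator)
idx = np.arange(n)
Omega = rho ** np.abs(idx[:, None] - idx[None, :])
Oinv = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)

# the residuals y - X beta_gls still look AR(1)
resid = y - X @ beta_gls
lag1 = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(beta_gls, lag1)   # slope near 1; lag-1 autocorrelation near rho
```

The coefficient estimates are efficient, yet the residual autocorrelation is essentially untouched, which is exactly the situation described in the question.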
34,585 | Non-Correlated errors from Generalized Least Square model (GLS) | You want the normalized residuals. See ?residuals.lme.
#Reproducible code from ?corARMA
library(nlme)  # lme, corARMA and the Ovary data live here
fm1Ovar.lme <- lme(follicles ~ sin(2*pi*Time) + cos(2*pi*Time),
data = Ovary, random = pdDiag(~sin(2*pi*Time)))
fm5Ovar.lme <- update(fm1Ovar.lme,
corr = corARMA(p = 1, q = 1))
#raw residuals divided by the corresponding standard errors
acf(residuals(fm5Ovar.lme),type="partial")
#standardized residuals pre-multiplied
#by the inverse square-root factor of the estimated error correlation matrix
acf(residuals(fm5Ovar.lme,type="normalized"),type="partial") | Non-Correlated errors from Generalized Least Square model (GLS) | You want the normalized residuals. See ?residuals.lme.
#Reproducible code from ?corARMA
fm1Ovar.lme <- lme(follicles ~ sin(2*pi*Time) + cos(2*pi*Time),
data = Ovary, random = pdDiag | Non-Correlated errors from Generalized Least Square model (GLS)
You want the normalized residuals. See ?residuals.lme.
#Reproducible code from ?corARMA
library(nlme)  # lme, corARMA and the Ovary data live here
fm1Ovar.lme <- lme(follicles ~ sin(2*pi*Time) + cos(2*pi*Time),
data = Ovary, random = pdDiag(~sin(2*pi*Time)))
fm5Ovar.lme <- update(fm1Ovar.lme,
corr = corARMA(p = 1, q = 1))
#raw residuals divided by the corresponding standard errors
acf(residuals(fm5Ovar.lme),type="partial")
#standardized residuals pre-multiplied
#by the inverse square-root factor of the estimated error correlation matrix
acf(residuals(fm5Ovar.lme,type="normalized"),type="partial") | Non-Correlated errors from Generalized Least Square model (GLS)
You want the normalized residuals. See ?residuals.lme.
#Reproducible code from ?corARMA
fm1Ovar.lme <- lme(follicles ~ sin(2*pi*Time) + cos(2*pi*Time),
data = Ovary, random = pdDiag |
34,586 | How to set limits using constrOptim in R? | Here's an example that we can use to illustrate ui and ci, with some extraneous output removed for brevity. It's maximizing the log likelihood of a normal distribution. In the first part, we use the optim function with box constraints, and in the second part, we use the constrOptim function with its version of the same box constraints.
# function to be optimized
> foo.unconstr <- function(par, x) -sum(dnorm(x, par[1], par[2], log=TRUE))
> x <- rnorm(100,1,1)
> optim(c(1,1), foo.unconstr, lower=c(0,0), upper=c(5,5), method="L-BFGS-B", x=x)
$par
[1] 1.147652 1.077654
$value
[1] 149.3724
>
> # constrOptim example
>
> ui <- cbind(c(1,-1,0,0),c(0,0,1,-1))
> ui
[,1] [,2]
[1,] 1 0
[2,] -1 0
[3,] 0 1
[4,] 0 -1
> ci <- c(0, -5, 0, -5)
>
> constrOptim(c(1,1), foo.unconstr, grad=NULL, ui=ui, ci=ci, x=x)
$par
[1] 1.147690 1.077712
$value
[1] 149.3724
... blah blah blah ...
outer.iterations
[1] 2
$barrier.value
[1] -0.001079475
>
If you look at the ui matrix and imagine multiplying by the parameter vector to be optimized, call it $\theta$, you'll see that the result has four rows, the first of which is $\theta_1$, the second $-\theta_1$, the third $\theta_2$, and the fourth $-\theta_2$. Subtracting off the ci vector and enforcing the $\ge 0$ constraint on each row results in $\theta_1 \ge 0$, $-\theta_1 + 5 \ge 0$, $\theta_2 \ge 0$ and $-\theta_2 + 5 \ge 0$. Obviously, multiplying the second and fourth constraints by -1 and moving the constant to the right hand side gets you to $\theta_1 \le 5$ and $\theta_2 \le 5$, the upper bound constraints.
Just substitute your own values into the ci vector and add appropriate columns (if any) to the ui vector to get the box constraint set you want. | How to set limits using constrOptim in R? | Here's an example that we can use to illustrate ui and ci, with some extraneous output removed for brevity. It's maximizing the log likelihood of a normal distribution. In the first part, we use the | How to set limits using constrOptim in R?
Here's an example that we can use to illustrate ui and ci, with some extraneous output removed for brevity. It's maximizing the log likelihood of a normal distribution. In the first part, we use the optim function with box constraints, and in the second part, we use the constrOptim function with its version of the same box constraints.
# function to be optimized
> foo.unconstr <- function(par, x) -sum(dnorm(x, par[1], par[2], log=TRUE))
> x <- rnorm(100,1,1)
> optim(c(1,1), foo.unconstr, lower=c(0,0), upper=c(5,5), method="L-BFGS-B", x=x)
$par
[1] 1.147652 1.077654
$value
[1] 149.3724
>
> # constrOptim example
>
> ui <- cbind(c(1,-1,0,0),c(0,0,1,-1))
> ui
[,1] [,2]
[1,] 1 0
[2,] -1 0
[3,] 0 1
[4,] 0 -1
> ci <- c(0, -5, 0, -5)
>
> constrOptim(c(1,1), foo.unconstr, grad=NULL, ui=ui, ci=ci, x=x)
$par
[1] 1.147690 1.077712
$value
[1] 149.3724
... blah blah blah ...
outer.iterations
[1] 2
$barrier.value
[1] -0.001079475
>
If you look at the ui matrix and imagine multiplying by the parameter vector to be optimized, call it $\theta$, you'll see that the result has four rows, the first of which is $\theta_1$, the second $-\theta_1$, the third $\theta_2$, and the fourth $-\theta_2$. Subtracting off the ci vector and enforcing the $\ge 0$ constraint on each row results in $\theta_1 \ge 0$, $-\theta_1 + 5 \ge 0$, $\theta_2 \ge 0$ and $-\theta_2 + 5 \ge 0$. Obviously, multiplying the second and fourth constraints by -1 and moving the constant to the right hand side gets you to $\theta_1 \le 5$ and $\theta_2 \le 5$, the upper bound constraints.
Just substitute your own values into the ci vector and add appropriate columns (if any) to the ui vector to get the box constraint set you want. | How to set limits using constrOptim in R?
Here's an example that we can use to illustrate ui and ci, with some extraneous output removed for brevity. It's maximizing the log likelihood of a normal distribution. In the first part, we use the |
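For comparison, the same bounded maximum-likelihood fit can be written in Python (a sketch using `scipy.optimize.minimize`, which accepts box constraints directly via `bounds`, much like `optim`'s `lower`/`upper`; the library choice is mine):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
x = rng.normal(1.0, 1.0, size=100)

# negative log likelihood of N(mu, sigma), to be minimized
def nll(par):
    mu, sigma = par
    return np.sum(0.5 * np.log(2 * np.pi) + np.log(sigma)
                  + 0.5 * ((x - mu) / sigma) ** 2)

# box constraints: both parameters in (0, 5], as in the R example
res = minimize(nll, x0=[1.0, 1.0], bounds=[(1e-6, 5.0), (1e-6, 5.0)])
print(res.x)   # close to the sample mean and (ML) standard deviation
```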
34,587 | How to set limits using constrOptim in R? | Your constraints are of two types,
either $\theta_i \geq a_i$,
or $\theta_i \leq b_i$.
The first ones are already in the right form
(and the matrix ui is just the identity matrix),
while the others can be written as
$-\theta_i \geq - b_i$:
ui is then $-I_n$ and ci is $-b$.
# Constraints
bounds <- matrix(c(
0,5,
0,Inf,
0,Inf,
0,1
), nc=2, byrow=TRUE)
colnames(bounds) <- c("lower", "upper")
# Convert the constraints to the ui and ci matrices
n <- nrow(bounds)
ui <- rbind( diag(n), -diag(n) )
ci <- c( bounds[,1], - bounds[,2] )
# Remove the infinite values
i <- as.vector(is.finite(bounds))
ui <- ui[i,]
ci <- ci[i]
# Constrained minimization
f <- function(u) sum((u+1)^2)
constrOptim(c(1,1,.01,.1), f, grad=NULL, ui=ui, ci=ci)
We can check how the constraint matrices ci and ui are interpreted:
# Print the constraints
k <- length(ci)
n <- dim(ui)[2]
for(i in seq_len(k)) {
j <- which( ui[i,] != 0 )
cat(paste( ui[i,j], " * ", "x[", (1:n)[j], "]", sep="", collapse=" + " ))
cat(" >= " )
cat( ci[i], "\n" )
}
# 1 * x[1] >= 0
# 1 * x[2] >= 0
# 1 * x[3] >= 0
# 1 * x[4] >= 0
# -1 * x[1] >= -5
# -1 * x[4] >= -1
Some of the algorithms in optim allow you
to specify the lower and upper bounds directly:
that is probably easier to use. | How to set limits using constrOptim in R? | Your constraints are of two types,
either $\theta_i \geq a_i$,
or $\theta_i \leq b_i$.
The first ones are already in the right form
(and the matrix ui is just the identity matrix),
while the others | How to set limits using constrOptim in R?
Your constraints are of two types,
either $\theta_i \geq a_i$,
or $\theta_i \leq b_i$.
The first ones are already in the right form
(and the matrix ui is just the identity matrix),
while the others can be written as
$-\theta_i \geq - b_i$:
ui is then $-I_n$ and ci is $-b$.
# Constraints
bounds <- matrix(c(
0,5,
0,Inf,
0,Inf,
0,1
), nc=2, byrow=TRUE)
colnames(bounds) <- c("lower", "upper")
# Convert the constraints to the ui and ci matrices
n <- nrow(bounds)
ui <- rbind( diag(n), -diag(n) )
ci <- c( bounds[,1], - bounds[,2] )
# Remove the infinite values
i <- as.vector(is.finite(bounds))
ui <- ui[i,]
ci <- ci[i]
# Constrained minimization
f <- function(u) sum((u+1)^2)
constrOptim(c(1,1,.01,.1), f, grad=NULL, ui=ui, ci=ci)
We can check how the constraint matrices ci and ui are interpreted:
# Print the constraints
k <- length(ci)
n <- dim(ui)[2]
for(i in seq_len(k)) {
j <- which( ui[i,] != 0 )
cat(paste( ui[i,j], " * ", "x[", (1:n)[j], "]", sep="", collapse=" + " ))
cat(" >= " )
cat( ci[i], "\n" )
}
# 1 * x[1] >= 0
# 1 * x[2] >= 0
# 1 * x[3] >= 0
# 1 * x[4] >= 0
# -1 * x[1] >= -5
# -1 * x[4] >= -1
Some of the algorithms in optim allow you
to specify the lower and upper bounds directly:
that is probably easier to use. | How to set limits using constrOptim in R?
Your constraints are of two types,
either $\theta_i \geq a_i$,
or $\theta_i \leq b_i$.
The first ones are already in the right form
(and the matrix ui is just the identity matrix),
while the others |
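The same `ui %*% theta >= ci` style of linear constraints has a direct analogue in SciPy's `LinearConstraint` (my sketch, reusing the answer's objective `f(u) = sum((u+1)^2)` and box constraints):

```python
import numpy as np
from scipy.optimize import LinearConstraint, minimize

# box constraints 0 <= x1 <= 5, x2 >= 0, x3 >= 0, 0 <= x4 <= 1,
# written as ui @ x >= ci, exactly as in the R code above
ui = np.vstack([np.eye(4), -np.eye(4)])
ci = np.array([0, 0, 0, 0, -5, -np.inf, -np.inf, -1], dtype=float)
keep = np.isfinite(ci)                      # drop the infinite bounds
lincon = LinearConstraint(ui[keep], lb=ci[keep], ub=np.inf)

f = lambda u: np.sum((u + 1.0) ** 2)        # same objective as the R example
res = minimize(f, x0=[1, 1, 0.01, 0.1], method="SLSQP", constraints=[lincon])
print(res.x)   # pushed onto the lower bounds, near [0, 0, 0, 0]
```

As in R, passing the bounds directly (`bounds=` in `minimize`) would be simpler here; the matrix form is only needed for genuinely linear, non-box constraints.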
34,588 | How to measure distance for features with different scales? | A very common solution for this very common problem (ie, over-weighting variables) is to standardize your data.
To do this, you just perform two successive column-wise operations on your data:
subtract the mean and
divide by the standard deviation
The rationale of these two operations is to ensure the values have zero mean (by subtracting the mean) and unit variance (by dividing by the standard deviation).
For instance, in NumPy:
>>> import numpy as NP
>>> # first create a small data matrix comprised of three variables
>>> # having three different 'scales' (means and variances)
>>> a = 10*NP.random.rand(6)
>>> b = 50*NP.random.rand(6)
>>> c = 2*NP.random.rand(6)
>>> A = NP.column_stack((a, b, c))
>>> A # the pre-standardized data
array([[ 1.753, 37.809, 1.181],
[ 1.386, 8.333, 0.235],
[ 2.827, 40.5 , 0.625],
[ 5.516, 47.202, 0.183],
[ 0.599, 27.017, 1.054],
[ 8.918, 35.398, 1.602]])
>>> # mean center the data (columnwise)
>>> A -= NP.mean(A, axis=0)
>>> A
array([[ -1.747, 5.099, 0.368],
[ -2.114, -24.377, -0.578],
[ -0.673, 7.79 , -0.189],
[ 2.016, 14.493, -0.631],
[ -2.901, -5.693, 0.24 ],
[ 5.418, 2.688, 0.789]])
>>> # divide by the standard deviation
>>> A /= NP.std(A, axis=0)
>>> A
array([[-0.606, 0.409, 0.716],
[-0.734, -1.957, -1.125],
[-0.233, 0.626, -0.367],
[ 0.7 , 1.164, -1.228],
[-1.007, -0.457, 0.468],
[ 1.881, 0.216, 1.536]]) | How to measure distance for features with different scales? | A very common solution for this very common problem (ie, over-weighting variables) is to standardize your data.
To do this, you just perform two successive column-wise operations on your data:
subtra | How to measure distance for features with different scales?
A very common solution for this very common problem (ie, over-weighting variables) is to standardize your data.
To do this, you just perform two successive column-wise operations on your data:
subtract the mean and
divide by the standard deviation
The rationale of these two operations is to ensure the values have zero mean (by subtracting the mean) and unit variance (by dividing by the standard deviation).
For instance, in NumPy:
>>> import numpy as NP
>>> # first create a small data matrix comprised of three variables
>>> # having three different 'scales' (means and variances)
>>> a = 10*NP.random.rand(6)
>>> b = 50*NP.random.rand(6)
>>> c = 2*NP.random.rand(6)
>>> A = NP.column_stack((a, b, c))
>>> A # the pre-standardized data
array([[ 1.753, 37.809, 1.181],
[ 1.386, 8.333, 0.235],
[ 2.827, 40.5 , 0.625],
[ 5.516, 47.202, 0.183],
[ 0.599, 27.017, 1.054],
[ 8.918, 35.398, 1.602]])
>>> # mean center the data (columnwise)
>>> A -= NP.mean(A, axis=0)
>>> A
array([[ -1.747, 5.099, 0.368],
[ -2.114, -24.377, -0.578],
[ -0.673, 7.79 , -0.189],
[ 2.016, 14.493, -0.631],
[ -2.901, -5.693, 0.24 ],
[ 5.418, 2.688, 0.789]])
>>> # divide by the standard deviation
>>> A /= NP.std(A, axis=0)
>>> A
array([[-0.606, 0.409, 0.716],
[-0.734, -1.957, -1.125],
[-0.233, 0.626, -0.367],
[ 0.7 , 1.164, -1.228],
[-1.007, -0.457, 0.468],
[ 1.881, 0.216, 1.536]]) | How to measure distance for features with different scales?
A very common solution for this very common problem (ie, over-weighting variables) is to standardize your data.
To do this, you just perform two successive column-wise operations on your data:
subtra |
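After standardization, plain Euclidean distance weights all variables equally; this is exactly SciPy's "standardized Euclidean" metric applied to the raw data (my sketch, not part of the original answer):

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(4)
# three variables on very different scales, as in the answer above
A = np.column_stack([10 * rng.random(6), 50 * rng.random(6), 2 * rng.random(6)])

# standardize columns, then use plain Euclidean distance
Z = (A - A.mean(axis=0)) / A.std(axis=0)
d_standardized = pdist(Z, metric="euclidean")

# equivalently: the "standardized Euclidean" metric on the raw data
d_seuclidean = pdist(A, metric="seuclidean", V=A.var(axis=0))

print(np.allclose(d_standardized, d_seuclidean))   # True
```

The mean-centering step cancels out of pairwise differences, so only the division by the standard deviation affects the distances.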
34,589 | How to measure distance for features with different scales? | One option is Gower's generalised (dis)similarity coefficient. It is defined as
$$s_{ij} = \frac{\sum \limits_{k = 1}^{m} w_{ijk} s_{ijk}}{\sum \limits_{k = 1}^{m} w_{ijk}}$$
where $s_{ij}$ is the overall similarity between samples $i$ and $j$, $s_{ijk}$ is the similarity between $i$ and $j$ for the $k$th variable and $w_{ijk}$ is the weight applied to the $k$th variable for samples $i$ and $j$.
The $s_{ijk}$ are computed separately for each of the $k$ variables, which is what allows the function to work on data in different units. If the data are binary (0/1) or nominal (classes) then $s_{ijk} = 1$ if and only if both $i$ and $j$ are in the same class (or, for binary data, both present or both absent).
For continuous data, $s_{ijk}$ is computed as
$$s_{ijk} = 1 - \frac{|x_{ik} - x_{jk}|}{r_k}$$
where $x$ are the observed data for $i$ or $j$ on the $k$th variable and $r_k$ is the range of the $k$th variable. $r_k$ is an implicit standardisation, again accounting for the fact that each variable is in different units.
The weights $w_{ijk}$ allow additional flexibility and also a facility to handle missing data. If the data are missing for $i$ or $j$ on the $k$th variable then that comparison doesn't contribute to the overall similarity between the two variables as that pairing gets weight 0. If the data are available for $x_i$ and $x_j$ for the $k$th variable then $w_{ijk} = 1$. Weights between 0 and 1 allow the user to give weight to different variables. | How to measure distance for features with different scales? | One option is Gower's generalised (dis)similarity coefficient. It is defined as
$$s_{ij} = \frac{\sum \limits_{k = 1}^{m} w_{ijk} s_{ijk}}{\sum \limits_{k = 1}^{m} w_{ijk}}$$
where $s_{ij}$ is the ov | How to measure distance for features with different scales?
One option is Gower's generalised (dis)similarity coefficient. It is defined as
$$s_{ij} = \frac{\sum \limits_{k = 1}^{m} w_{ijk} s_{ijk}}{\sum \limits_{k = 1}^{m} w_{ijk}}$$
where $s_{ij}$ is the overall similarity between samples $i$ and $j$, $s_{ijk}$ is the similarity between $i$ and $j$ for the $k$th variable and $w_{ijk}$ is the weight applied to the $k$th variable for samples $i$ and $j$.
The $s_{ijk}$ are computed separately for each of the $k$ variables, which is what allows the function to work on data in different units. If the data are binary (0/1) or nominal (classes) then $s_{ijk} = 1$ if and only if both $i$ and $j$ are in the same class (or, for binary data, both present or both absent).
For continuous data, $s_{ijk}$ is computed as
$$s_{ijk} = \frac{1 - |x_{ik} - x_{jk}|}{r_k}$$
where $x$ are the observed data for $i$ or $j$ on the $k$th variable and $r_K$ is the range of the $k$th variable. $r_k$ is an implicit standardisation, again accounting for the fact that each variable is in different units.
The weights $w_{ijk}$ allow additional flexibility and also a facility to handle missing data. If the data are missing for $i$ or $j$ on the $k$th variable then that comparison doesn't contribute to the overall similarity between the two variables as that pairing gets weight 0. If the data are available for $x_i$ and $x_j$ for the $k$th variable then $w_{ijk} = 1$. Weights between 0 and 1 allow the user to give weight to different variables. | How to measure distance for features with different scales?
One option is Gower's generalised (dis)simmilarity coefficient. It is defined as
$$s_{ij} = \frac{\sum \limits_{k = 1}^{m} w_{ijk} s_{ijk}}{\sum \limits_{k = 1}^{m} w_{ijk}}$$
where $s_{ij}$ is the ov |
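The pieces above can be put together in a short sketch. This is an illustrative pure-Python implementation of the coefficient as described — the function and argument names are my own, not from any library (in R, `cluster::daisy` computes Gower similarity for real):

```python
def gower_similarity(xi, xj, ranges, is_numeric, weights=None):
    """s_ij = sum_k w_k * s_ijk / sum_k w_k (all weights w_k = 1 by default)."""
    n = len(xi)
    w = weights if weights is not None else [1.0] * n
    num = den = 0.0
    for k in range(n):
        if is_numeric[k]:
            # continuous variable: s_ijk = 1 - |x_ik - x_jk| / r_k
            s_k = 1.0 - abs(xi[k] - xj[k]) / ranges[k]
        else:
            # binary/nominal variable: 1 if same class, else 0
            s_k = 1.0 if xi[k] == xj[k] else 0.0
        num += w[k] * s_k
        den += w[k]
    return num / den

# two samples measured on one nominal and one continuous variable
print(gower_similarity(["red", 10.0], ["red", 0.0],
                       ranges=[None, 20.0], is_numeric=[False, True]))  # 0.75
```

Missing data would be handled by setting the corresponding weight to 0, exactly as in the weighted sum above.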
34,590 | How to measure distance for features with different scales?
Despite the huge diversity of proximity measures, you could still use Euclidean distance in your case. The prerequisite for Euclidean distance is interval level of measurement for all variables. All your 4 variables (height, width, weight, and the ratio) are interval. So, after standardization (such as the one suggested by @doug) of the variables you may apply Euclidean distance. In your place, however, I would perhaps consider taking a logarithm or angular transformation of the ratio variable first (before standardization). And I don't think you really need the Gower coefficient (suggested by @Gavin) in your case, because all your variables are at the same level of measurement.
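The recipe in this answer (standardize each variable, then apply Euclidean distance) can be sketched in a few lines of Python; the sample values below are made up purely for illustration:

```python
import math

def standardize(columns):
    """Return z-scored copies of each column (list of lists)."""
    out = []
    for col in columns:
        mean = sum(col) / len(col)
        sd = math.sqrt(sum((v - mean) ** 2 for v in col) / len(col))
        out.append([(v - mean) / sd for v in col])
    return out

def euclidean(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# columns: height (cm), width (cm), weight (kg) -- hypothetical data
height = [170.0, 180.0, 160.0]
width = [40.0, 45.0, 35.0]
weight = [70.0, 90.0, 55.0]
zh, zw, zwt = standardize([height, width, weight])
samples = list(zip(zh, zw, zwt))
print(euclidean(samples[0], samples[1]))
```

After z-scoring, every variable contributes on a comparable scale, so no single unit of measurement dominates the distance.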
34,591 | Good reference book for epidemiology
The introductory book by Ken Rothman (which will affectionately be known as "Baby Rothman" from here on out) is not a representation of the quality of Modern Epidemiology by Rothman, Greenland and Lash (ME3).
Baby Rothman is meant to be a very basic introductory book, of the kind suited to a class non-Epidemiologists are taking for distribution requirements, or as a first step for someone who hasn't encountered much Epidemiology before.
ME3 on the other hand is essentially the definitive reference book for most epidemiological methods. It is the only Epidemiology textbook I've had that has always come with me, regardless of the project I'm doing, and it's proved invaluable. There are more than a few questions I've answered here with citations from it.
Beyond ME3, a few of the books I use regularly:
Survival Analysis Using SAS: A Practical Guide by Paul Allison. If you're a SAS user (or possibly even if you aren't), it's a very good treatment of the practice of survival analysis.
Survival Analysis by Klein and Moeschberger is a more theoretical treatment of and reference on survival analysis, but makes for a good supplement to Allison's book.
Modeling Infectious Diseases in Humans and Animals by Keeling and Rohani, if you're interested in mathematical epidemiology, is a good introductory book that keeps a balance of practice and math.
Most other references I use are either very domain specific, or programming books.
But seriously, if you have to buy one book, that book should be Modern Epidemiology.
34,592 | Good reference book for epidemiology
ME3 (Modern Epidemiology, 3rd edition) by Rothman et al. is the standard for doctoral programs in Epidemiology in the US. My opinion is based on my experience teaching in the epidemiologic methods core curriculum sequence at Hopkins and UNC for 15 years.
I would supplement with Causal Inference by Hernan and Robins.
I have used Paul Allison's and Klein and Moeschberger's survival books. The former will be easier without instruction.
I would also recommend Statistical Inference by Casella and Berger.
Finally, Understanding Uncertainty by Lindley is a great place to start.
34,593 | k-fold CV of forecasting financial time series -- is performance on last fold more relevant?
With time series, you cannot test a forecasting model via cross-validation in the normal way because you are then using future observations to predict the past. You must use only past observations to predict the future. The time series equivalent of LOO CV is to use a rolling forecast origin instead. I've written about it in this blog post. I'm not sure if k-fold CV has a direct time series equivalent.
34,594 | k-fold CV of forecasting financial time series -- is performance on last fold more relevant?
In the scikit-learn Python library there is something called "TimeSeriesSplit", which basically produces the set of training/test samples you would get from a walk-forward optimization. Rob was right: you cannot use future datapoints to train for past test sets, so the best way to cross-validate is to split your data into as many "folds" as possible while keeping the test set walking forward. The consequence is that each successive training set is a superset of those before it, and each test set moves forward to more and more recent data, staying ahead of its training set.
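The walk-forward scheme described above can be sketched in plain Python. scikit-learn's `sklearn.model_selection.TimeSeriesSplit` implements essentially this splitting rule; the helper below is an illustrative stand-in (not a library function) that makes the produced indices concrete:

```python
def walk_forward_splits(n_samples, n_splits):
    """Yield (train_indices, test_indices) pairs.

    Each training set is a superset of the previous one, and each
    test set lies strictly after its training set in time.
    """
    test_size = n_samples // (n_splits + 1)
    for i in range(1, n_splits + 1):
        train_end = n_samples - (n_splits - i + 1) * test_size
        train = list(range(0, train_end))
        test = list(range(train_end, train_end + test_size))
        yield train, test

for train, test in walk_forward_splits(12, 3):
    # first split: train [0, 1, 2], test [3, 4, 5]
    print(train, test)
```

Note the contrast with ordinary k-fold CV: here no test index ever precedes a training index, so the model is never trained on the future.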
34,595 | k-fold CV of forecasting financial time series -- is performance on last fold more relevant?
I'm actually working on this same topic of the "cross-validation with financial predictive modeling" type of problem. So here are some of my findings.
Basically, I think that cross-validation by itself needs additional important considerations in order to produce valid and useful results for the case of financial time series predictive modeling.
A short answer for your question is in two parts:
Performance on the last fold being better than average sounds like an example of Simpson's paradox.
The results you are getting could be due to back-test overfitting, and/or an information leakage between explanatory variables in fold 2 and the target variable in fold 1. Particularly in FTS, out-of-sample generalization does not guarantee out-of-distribution generalization.
More resources, reformulated questions, known effects in financial time series.
No unbiased estimator for the variance
We normally perform CV in order to increase our confidence in the model's performance whenever new data arrives. Such an endeavor ultimately leads to a trade-off we have to make (it would be good to do that consciously and explicitly), and that would be the bias-variance trade-off, which arises when we try to simultaneously minimize bias and variance, two different sources of error and the main reasons that prevent supervised learning algorithms from generalizing (learning) beyond the training data. Theoretically, we can have certainty that cross-validation does help to reduce bias, but we do not (yet) have a way to prove that there exists an estimator that correctly expresses some properties of the variance, so the variance frequently goes underestimated, as stated in this work. And so,
The case of financial time series (FTS)
In my current opinion, this work does provide various extra and special considerations when working with FTS. A quick list of important effects would be: leakage of information, back-test overfitting, memory loss. And the respective techniques, as proposed in the cited work, are: purge & embargo, deflated performance metrics, fractional differentiation. I mention those concepts because they could be the real causes of performance variation in your methods, a "deeper" reason since you are working with FTS, not just the cross-validation perspective by itself.
Visual "classical" examples.
I am working on a Python library that will provide methods, visualizations, and tests for this particular question of "what type of cross-validation is useful for FTS". Here are some early examples I drew for that.
Hope this comment helps new visitors, maybe not by answering the question directly (there can only be 1 accepted answer), but by providing more new questions, sources, and terms for further exploration.
34,596 | A/B testing in Python or R [closed]
Sure, for both Python and R there are a few interesting and usable packages/libraries.
First, for Python, I highly recommend reading this StackOverflow answer directed to a question about A/B testing in Python/Django. It's a one-page Master's thesis on the subject.
Akoha is a fairly recent (a little more than one year old) package directed to A/B testing in Django. I haven't used this package but it is apparently the most widely used Django package of this type (based on number of downloads). It is available on bitbucket.
Django-AB is the other Django package I am aware of and the only one I have used.
As you would expect of packages supporting a web framework, each provides a micro-framework to set up, configure, conduct, and record the results of A/B tests. As you would expect, they both work by dynamically switching the (django) template (skeleton HTML page) referenced in the views.py file.
For R, I highly recommend the agricolae package, authored and maintained by a university in Peru and available on CRAN. (See also agridat, which is comprised of very useful datasets from completed A/B and multi-variate tests.)
As far as I know, and I have referred to the agricolae documentation quite a few times, web applications or web sites are never mentioned as the test/analytical subject. From the package name, you can tell that the domain is agriculture, but the analogy with testing on the Web is nearly perfect.
This package nicely complements the two Django packages, because agricolae is directed to the beginning (test design and establishing success/termination criteria) and end (analysis of the results) of the A/B test workflow.
34,597 | A/B testing in Python or R [closed]
Depending on the approach you want to take to the subject, below are two alternatives. The first is traditional chi-squared testing for split testing and the second is a Bayesian approach to split testing. Depending on your organizational stakeholders' requirements for the analysis, you might as well do both if you have the data.
Chi-squared (traditional) A/B testing with Python:
http://okomestudio.net/biboroku/?p=2375
Bayesian A/B testing with Python: http://www.bayesianwitch.com/blog/2014/bayesian_ab_test.html
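For the chi-squared route, the 2x2 statistic is simple enough to compute by hand. This sketch uses hypothetical conversion counts; in practice `scipy.stats.chi2_contingency` (which also returns the p-value) is the usual tool:

```python
def chi_squared_2x2(a_conv, a_total, b_conv, b_total):
    """Chi-squared statistic for a 2x2 A/B table (no continuity correction)."""
    table = [
        [a_conv, a_total - a_conv],   # variant A: converted / not converted
        [b_conv, b_total - b_conv],   # variant B: converted / not converted
    ]
    grand = a_total + b_total
    col_totals = [table[0][0] + table[1][0], table[0][1] + table[1][1]]
    row_totals = [a_total, b_total]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            # expected count under independence: row total * column total / grand total
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# identical conversion rates in both variants -> statistic is 0
print(chi_squared_2x2(20, 100, 20, 100))  # 0.0
```

The statistic is then compared against a chi-squared distribution with 1 degree of freedom to get the p-value.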
34,598 | How to make glmnet give the same results as glm?
You will get the same results as glm when you pass alpha=1 (the default) and lambda=0, especially that last one: it means no penalization.
Note that the method of fitting is different in the two functions, so although theoretically you should get the same result, there may still be tiny differences depending on your data.
34,599 | How to make glmnet give the same results as glm?
Although unregularized glmnet should behave like glm, glmnet will override lambda=0 in some circumstances. It's safer to use the base glm function, especially if you're training a large number of models.
x <- structure(c(0.028, 0.023, 0.0077, 0.14, 0.027, 0.084, 0.018,
0.055, 0.0089, 0.016, 0.037, 0.043, 0.046, 0.031, 0.034, 0.056,
0.016, 0.048, 0.013, 0.02, 0.067, 0.046, 0.058, 0.054, 0.036,
0.043, 0.009, 0.12, 0.024, 0.018, 0.066, 0.046, 0.057, 0.054,
0.036, 0.043, 0.009, 0.12, 0.024, 0.018, 0.051, 0.043, 0.047,
0.045, 0.034, 0.04, 0.009, 0.085, 0.022, 0.016, 0.028, 0.023,
0.0089, 0.14, 0.028, 0.084, 0.02, 0.055, 0.0089, 0.016, 0.067,
0.049, 0.058, 0.055, 0.038, 0.043, 0.009, 0.12, 0.024, 0.018,
0.067, 0.046, 0.058, 0.054, 0.036, 0.043, 0.009, 0.12, 0.024,
0.018), .Dim = c(10L, 8L), .Dimnames = list(NULL, NULL))
y <- gl(2, 5)
fit <- glmnet::glmnet(x, y, family = "binomial", lambda = 0)
fit$lambda # should be 0 but actually infinity
Warning messages:
1: In lognet(xd, is.sparse, ix, jx, y, weights, offset, alpha, nobs, :
one multinomial or binomial class has fewer than 8 observations; dangerous ground
2: from glmnet Fortran code (error code -1); Convergence for 1th lambda value not reached after maxit=100000 iterations; solutions for larger lambdas returned
3: In getcoef(fit, nvars, nx, vnames) :
an empty model has been returned; probably a convergence issue
34,600 | Bayesian two-factor ANOVA
Simon Jackman has some working code for fitting ANOVA and regression models with JAGS (which is pretty much like BUGS), e.g. two-way ANOVA via JAGS (R code), or maybe among his handouts on Bayesian analysis for the social sciences.
A lot of WinBUGS code, including one- and two-way ANOVA, seems to be available on the companion website for Bayesian Modeling Using WinBUGS: An Introduction.