idx | question | answer
|---|---|---|
30,801 | Why aren't all tests scored via item analysis/response theory? | A first argument has to do with transparency. @rolando2 has already made this point. The students want to know ex-ante how much each item is worth.
A second argument is that the weights reflect not only the difficulty of a question, but also the importance the instructor attaches to it. Indeed, the aim of an exam is to test and certify knowledge and competencies. As such, the weights attributed to different questions and items have to be set beforehand by the teacher. You should not forget that "all models are wrong, and only some are useful". In this case one can have some doubts about the usefulness.
This being said, I think that (more or less fancy) statistical analysis could come in ex-post, for the analysis of the results. There it can yield some interesting insights. Whether, and to what degree, this is done certainly depends on the statistical skills of the teacher.
30,802 | Why aren't all tests scored via item analysis/response theory? | I wanted to make a clarification regarding the original question. In item response theory, the discrimination (i.e. item slope or factor loading) is not indicative of difficulty. Using a model that allows for varying discrimination for each item is effectively weighting them according to their estimated correlation to the latent variable, not by their difficulty.
In other words, a more difficult item could be weighted down if it is estimated to be fairly uncorrelated with the dimension of interest and, vice versa, an easier item could be weighted up if it is estimated to be highly correlated.
I agree with previous answers that point to (a) the lack of awareness of item response methods among practitioners, (b) the fact that using these models requires some technical expertise even if one is aware of their advantages (especially the ability to evaluate the fit of the measurement model), (c) the students' expectations as pointed out by @rolando2, and last but not least (d) the theoretical considerations that instructors may have for weighting different items differently. However, I did want to mention that:
Not all item response theory models allow the discrimination parameter to vary across items; the Rasch model is probably the best-known example of a model where the discriminations are held constant. Under the Rasch family of models, the sum score is a sufficient statistic for the item response score; therefore, there will be no difference in the order of the respondents, and the only practical differences will be appreciated if the 'distances' between the score groups are considered.
There are researchers that defend the use of classical test theory (which relies on the traditional use of sum scores or average correct) for both theoretical and empirical reasons. Perhaps the most common argument is that scores generated under item response theory are effectively very similar to the ones produced under classical test theory. See for example the work by Xu & Stone (2011), Using IRT Trait Estimates Versus Summated Scores in Predicting Outcomes, Educational and Psychological Measurement, where they report correlations over .97 under a wide array of conditions.
30,803 | Why aren't all tests scored via item analysis/response theory? | Shouldn't a student's score be based on what they know and answer on the test rather than what everyone else in the class does?
If you gave the same test in 2 different years and you had 2 students (1 in each) who answered the exact same questions correctly (without cheating), does it really make sense that they would receive different grades based on how much the other students in their class studied?
And personally, I don't want to give any students motivation to sabotage their classmates in place of learning the material themselves.
IRT can give some insight into the test, but I would not use it to actively weight the scores.
When I think of weights, I think that someone should get more points for getting a hard question correct, but they should lose more points for getting an easy question wrong. Combine those and you still end up with equal weighting. Or I actually try to weight based on time or effort needed to answer the question, so that someone who answers the questions in a different order does not have an advantage on a timed test.
30,804 | Is sampling from a folded normal distribution equivalent to sampling from a normal distribution truncated at 0? | Yes, the approaches give the same results for a zero-mean Normal distribution.
It suffices to check that probabilities agree on intervals, because these generate the sigma algebra of all (Lebesgue) measurable sets. Let $\Phi$ denote the standard Normal distribution, viewed as a probability measure: $\Phi((a,b])$ gives the probability that a standard Normal variate lies in the interval $(a,b]$. Then, for $0 \le a \le b$, the truncated probability is
$$\Phi_{\text{truncated}}((a,b]) = \Phi((a,b]) / \Phi([0, \infty)) = 2\Phi((a,b])$$
(because $\Phi([0, \infty)) = 1/2$) and the folded probability is
$$\Phi_{\text{folded}}((a,b]) = \Phi((a,b]) + \Phi([-b,-a)) = 2\Phi((a,b])$$
due to the symmetry of $\Phi$ about $0$.
This analysis holds for any distribution that is symmetric about $0$ and has zero probability of being $0$. If the mean is nonzero, however, the distribution is not symmetric and the two approaches do not give the same result, as the same calculations show.
This graph shows the probability density functions for a Normal(1,1) distribution (yellow), a folded Normal(1,1) distribution (red), and a truncated Normal(1,1) distribution (blue). Note how the folded distribution does not share the characteristic bell-curve shape with the other two. The blue curve (truncated distribution) is the positive part of the yellow curve, scaled up to have unit area, whereas the red curve (folded distribution) is the sum of the positive part of the yellow curve and its negative tail (as reflected around the y-axis).
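A quick numerical check of the zero-mean case (a sketch added here for illustration; it is not part of the original answer):
# Compare |X| (folded at 0) with X given X > 0 (truncated at 0) when X ~ N(0, 1).
set.seed(1)
x <- rnorm(1e6)
folded    <- abs(x)
truncated <- x[x > 0]
round(quantile(folded,    c(0.25, 0.5, 0.75, 0.95)), 3)
round(quantile(truncated, c(0.25, 0.5, 0.75, 0.95)), 3)   # should closely match the line above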
30,805 | Is sampling from a folded normal distribution equivalent to sampling from a normal distribution truncated at 0? | Let $X \sim N(\mu = 1, SD=1)$. The distribution of $X|X > 0$ is definitely not the same as that of $|X|$.
A quick test in R:
x <- rnorm(10000, 1, 1)       # draws from a Normal with mean 1 and SD 1
par(mfrow=c(2,1))
hist(abs(x), breaks=100)      # folded at 0: |X|
hist(x[x > 0], breaks=100)    # truncated at 0: X given X > 0
This gives the following pair of histograms.
30,806 | How to interpret p-values of 0 or 1? | All that the 0 and 1 mean is that the values are very, very close to 0 or 1. If you look carefully you'll see that when the adjusted p is 1 the effect is almost 0, and when the adjusted p is 0 the nearer bound of the effect is very far away. Therefore, there's nothing "wrong" per se. Now look at how many significant digits you have. The 1 or 0 just means that the value is closer to that bound than can be represented by a number with that many digits. Feel free to report something like < 0.0001, or > 0.9999.
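For example, base R's format.pval() reports values below a chosen threshold as bounded (an added illustration, not part of the original answer):
p <- c(2e-16, 0.99999999)
format.pval(p, eps = 1e-4)    # the tiny value prints as "<1e-04"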
30,807 | How to create a dataset with conditional probability? | You know the following marginal probabilities
                Symptom
              Yes      No       Total
Disease Yes   a        b        0.003
        No    c        d        0.997
Total         0.005    0.995    1.000
and that a/(a+b) = 0.3 so this becomes
                Symptom
              Yes       No        Total
Disease Yes   0.0009    0.0021    0.003
        No    0.0041    0.9929    0.997
Total         0.005     0.995     1.000
and indeed a/(a+c) = 0.18 as you stated.
So in R you could code something like
diseaserate <- 3/1000
symptomrate <- 5/1000
symptomgivendisease <- 0.3
status <- sample(c("SYDY", "SNDY", "SYDN", "SNDN"), 1000,
prob=c(diseaserate * symptomgivendisease,
diseaserate * (1-symptomgivendisease),
symptomrate - diseaserate * symptomgivendisease,
1 - symptomrate - diseaserate * (1-symptomgivendisease)),
rep=TRUE)
symptom <- status %in% c("SYDY","SYDN")
disease <- status %in% c("SYDY","SNDY")
though you should note that 1000 is a small sample when one of the events has a probability of 0.0009 of happening.
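As a quick check (an added illustration, not part of the original answer), a much larger sample makes the empirical conditional probabilities easy to verify:
set.seed(42)
n <- 1e6
status <- sample(c("SYDY", "SNDY", "SYDN", "SNDN"), n,
                 prob=c(0.0009, 0.0021, 0.0041, 0.9929), replace=TRUE)
symptom <- status %in% c("SYDY", "SYDN")
disease <- status %in% c("SYDY", "SNDY")
mean(symptom[disease])   # should be near 0.30 = P(symptom | disease)
mean(disease[symptom])   # should be near 0.18 = P(disease | symptom)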
30,808 | How to create a dataset with conditional probability? | The table function returns a matrix-like object:
> symptom <- sample(c("yes","no"), 100, prob=c(0.2, 0.8), rep=TRUE)
> disease <- sample(c("yes","no"), 100, prob=c(0.2, 0.8), rep=TRUE)
> dataset <- data.frame(symptom, disease)
> dst_S_D <-with(dataset, table(symptom, disease))
> dst_S_D
       disease
symptom  no yes
    no   65  13
    yes  17   5
So the Pr(D|S="yes") =
> probD_Sy <- dst_S_D[2, 2]/sum(dst_S_D[2, ] )
> probD_Sy
[1] 0.2272727
I changed the problem because the first time I ran it with your parameters, I got:
> dst_S_D <-with(dataset, table(symptom, disease)); dst_S_D
       disease
symptom    no yes
    no   9954  22
    yes    24   0
And I thought a Pr(D|S="yes") of 0 was rather boring. If you are going to run this many times you should construct a function and use that function with the replicate function.
Here is a method of constructing a dataset that applies a different probability of disease in the symptomatic group, one that is 3 times higher than the probability used in the asymptomatic group:
symptom <- sample(c("yes","no"), 10000, prob=c(0.02, 0.98), rep=TRUE)
dataset <- data.frame(symptom, disease=NA)
dataset$disease[dataset$symptom == "yes"] <-
sample(c("yes","no"), sum(dataset$symptom == "yes"), prob=c(0.15, 1-0.15), rep=TRUE)
dataset$disease[dataset$symptom == "no"] <-
sample(c("yes","no"), sum(dataset$symptom == "no"), prob=c(0.05, 1-0.05), rep=TRUE)
dst_S_D <- with(dataset, table(symptom, disease)); dst_S_D
#         disease
# symptom    no  yes
#     no   9284  509
#     yes   176   31
30,809 | How to create a dataset with conditional probability? | I'd argue your question isn't really that heavily dependent on the R language, and is more appropriate here, because - to be blunt - the generation of data like this is mostly a statistical task, rather than a programming one.
First Question: p(S|D) is the risk of having symptom S in a population with disease D. It can be directly comparable to the prevalence with certain caveats, like the symptom having no impact on disease duration. Consider the following example: One of the symptoms of SuperEbola is Instant Death, with p(Death | Super Ebola) = 0.99. Here, your prevalence of the symptom would actually be extremely low (indeed, 0.00) as no one you can sample with the disease has the symptom.
Second Question: I would back into this in a somewhat stepwise fashion. First, calculate the baseline risk of the symptom you'll need to get 0.15 in the whole population, taking into account that 0.3% of your population will be at a higher rate. Then essentially generate two probabilities:
Risk of disease = 0.003
Risk of symptom = calculated baseline risk + relative increase due to disease * binary indicator of disease status
Then generate two uniform random numbers. If the first is less than 0.003, they've got the disease. That then gets fed into the risk calculation for the second, and if the random number for each individual is less than their risk, they've got the symptom.
This is sort of a plodding, inelegant way to do things, and it's likely someone will come by with a far more efficient approach. But I find in simulation studies that spelling each step out in the code, and keeping it as close to how I would see a data set in the real world, is useful.
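A minimal R sketch of the stepwise approach described above (an added illustration; the specific baseline and increased symptom risks are assumed for concreteness and are not taken from the original answer):
set.seed(123)
n <- 10000
p_disease  <- 0.003
p_base     <- 0.10    # assumed baseline symptom risk
p_increase <- 0.05    # assumed extra symptom risk when diseased
u1 <- runif(n)
u2 <- runif(n)
disease   <- u1 < p_disease
p_symptom <- p_base + p_increase * disease
symptom   <- u2 < p_symptom
table(disease, symptom)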
30,810 | How to create a dataset with conditional probability? | First question:
Yes, of course, that is almost the definition, although you will have some error associated with your sample size; i.e., this is only exactly correct at an infinite sample size.
Second question:
This is called Bayes' theorem, but I presume you already know that. Now, given the information you have provided, I get the probability P(D|S) as 0.18 or 18%:
$$P(D \mid S) = \frac{P(S \mid D)\,P(D)}{P(S)} = \frac{0.3 \times (3/1000)}{5/1000} = 0.18$$
Now unfortunately, I am not too familiar with R so can't really help you out with an exact program. But surely the quantities of people that fall into each group are quite easy to calculate:
For your 10000 sample set you need:
50 people with symptoms (population*P(S))
9 people should have symptoms and the disease (50*P(D|S))
21 people with the disease and no symptoms (population*P(D)=30 and we already have 9)
Which should make generating a suitable population fairly trivial.
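A small R sketch of that construction (an added illustration, not part of the original answer): build the exact counts for a population of 10,000 (9 with symptom and disease, 41 with symptom only, 21 with disease only) and check P(D|S):
n <- 10000
disease <- c(rep(TRUE, 9), rep(FALSE, 41), rep(TRUE, 21),  rep(FALSE, n - 71))
symptom <- c(rep(TRUE, 9), rep(TRUE, 41),  rep(FALSE, 21), rep(FALSE, n - 71))
idx <- sample.int(n)                                 # shuffle the rows
pop <- data.frame(disease = disease[idx], symptom = symptom[idx])
with(pop, table(disease, symptom))
sum(pop$disease & pop$symptom) / sum(pop$symptom)    # P(D | S) = 9/50 = 0.18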
30,811 | General advice on modeling | In my opinion, Frank Harrell's "Regression Modeling Strategies" is a good reference. In fact, it is probably my favourite statistics book.
I've only studied less than half of the book so far, but have got lots of good stuff out of it, for example, representing predictors as splines to avoid assuming linearity, multiple imputation for missing data, and bootstrap model validation. Perhaps my favourite thing about the book is the general theme that an important goal is to get results which will replicate on new data, not results that only hold on the current data.
Additional benefits are Frank Harrell's R package rms which makes it easy to do many of the things described in the book, and his willingness to answer questions here and on R-help.
30,812 | General advice on modeling | The latter statement seems to be in the spirit of Sims' critique ((1980) Macroeconomics and Reality, Econometrica, January, pp. 1-48.), where he
...advocates the use of VAR models as a theory-free method to estimate economic relationships, thus being an alternative to the "incredible identification restrictions" in structural models [from wiki]
But probably S. Johansen (one of the pioneers of cointegration analysis) could follow the same spirit. From what I was taught, the model-building sequence is like:
Clarify the primary aim of the model: forecasting, structural relationships (simulations), causal relationships, latent factors, etc.
Abstract model is the real world, which could be "too real" to cover completely in your application, but it gives a feeling (or understanding) of what is going on
Verbal model brings in some theory or translates your understanding into statements and hypotheses to be tested; empirical (sometimes called stylized) facts are collected at this step
Mathematical model only now can you formulate your theory in the form of equations (difference, differential); such models are often deterministic (though one can merge this step with the latter one and consider stochastic differential equations, for instance), thus you need...
Econometric (statistical) model adding stochastic parts, the theory and methods of applied statistics and probability theory, micro- and macro-econometrics.
Hope this was helpful.
30,813 | General advice on modeling | The reference to "letting the data guide the model" can be attributed to George E. P. Box and Gwilym M. Jenkins. In Chapter 2 of their classic textbook, Time Series Analysis: Forecasting and Control (1976), it is said that:
The obtaining of sample estimates of the autocorrelation function and of the spectrum are non-structural approaches, analogous to the representation of an empirical distribution function by a histogram. They are both ways of letting the data from stationary series "speak for themselves" and provide a first step in the analysis of time series, just as a histogram can provide a first step in the distributional analysis of data, pointing the way to some parametric model on which subsequent analysis will be based.
This modelling procedure of letting the data do the talking, as advocated by Box & Jenkins, is obviously referred to throughout the literature on ARIMA modelling. For example, in the context of identifying tentative ARIMA models, Pankratz (1983) says:
Note that we do not approach the available data with a rigid, preconceived idea about which model we will use. Instead, we let the available data "talk to us" in the form of an estimated autocorrelation function and partial autocorrelation function.
So, it can be said that the idea of "letting the data guide the model" is a prevalent feature in time-series analysis.
Similar notions can, however, be found in other (sub)fields of study. For example, @Dmitrij Celov has correctly made reference to Christopher Sims' path breaking article, Macroeconomics and Reality (1980), which was a reaction against the use of large-scale simultaneous equation models in macroeconomics.
The traditional approach in macroeconomics was to use economic theory as a guide to build macroeconomic models. Often, the models were made up of hundreds of equations, and restrictions, such as pre-deciding the signs of some coefficients, would be imposed on them. Sims (1980) was critical of using this a priori knowledge to build macroeconomic models:
The fact that large macroeconomic models are dynamic is a rich source of spurious 'a priori' restrictions.
As already mentioned by @Dmitrij Celov, the alternative approach advocated by Sims (1980) was to specify vector autoregressive equations - which are (essentially) based on a variable's own lagged values and on lagged values of other variables.
Although I am a fan of the notion of "letting the data speak for itself", I'm not too sure if this methodology can be extended fully into all areas of study. For example, consider doing a study in labour economics to try to explain the difference between wage rates among males and females within a given country. Selecting the set of regressors in such a model will probably be guided by human capital theory. In other contexts, the set of regressors can be selected based upon what interests us and what common sense tells us. Verbeek (2008) says:
It is good practice to select the set of potentially relevant variables on the basis of economic arguments rather than statistical ones. Although it is sometimes suggested otherwise, statistical arguments are never certainty arguments.
Really, I can only scratch the surface here because it's such a large topic, but the best reference that I've come across on modelling is Granger (1991). If your background is not economics, don't let the title of the book put you off. Most of the discussion does take place in the context of modelling economic series, but I'm sure those from other fields would get a lot out of it and find it useful.
The book contains excellent discussions about different modelling methodologies such as:
The general-to-specific approach (or LSE methodology) as advocated by David Hendry.
The specific-to-general approach.
Edward Leamer's methodology (usually associated with the terms "sensitivity (or extreme bounds) analysis" & "Bayesian").
Coincidentally, Christopher Sims' approach is covered too.
It's worth noting that Granger (1991) is actually a collection of papers, so rather than trying to get a copy of the book, you can, of course, look up the table of contents and try to find the articles on their own.
Hope this has proved helpful!
References:
Box, G. E., & Jenkins, G. M. (1976). Time series analysis: Forecasting and control. Holden-Day series in time series analysis.
Granger, C. W. (Ed.). (1991). Modelling economic series: readings in econometric methodology. Oxford University Press.
Pankratz, A. (1983) Forecasting with univariate Box–Jenkins models: concepts and cases. New York: John Wiley & Sons.
Sims, C. A. (1980). Macroeconomics and Reality. Econometrica, 48(1), 1-48.
Verbeek, M. (2008). A guide to modern econometrics. Wiley.
30,814 | Zero inflated models - "true zero" vs. "excess zero" | I only know what I've read, but I believe the difference is that excess zeros are zeros where there could not be any events, while true zeros occur where there could have been an event, but there was none. For example, people coming into a bank: during business hours, there might be a period of time when zero customers entered the bank (true zero), but when the bank is closed, you will always get zeros (excess zeros) and since the bank is closed more than it is open you will get a lot of excess zeros.
30,815 | Zero inflated models - "true zero" vs. "excess zero" | The book by Zuur et al Mixed Effects Models and Extension in Ecology with R provides extensive explanations of ZIP models of various sorts. They state that "zeros due to design, survey, or observer error are...called false zeros or false negatives [or, I believe, the excess zeros you are talking about]. In a perfect world, we should not have them. The structural zeros are called positive zeros, true zeros, or true negatives." (page 271). They continue to discuss how a hurdle model handles these different kinds of zeros differently than a zero-inflated model.
30,816 | Zero inflated models - "true zero" vs. "excess zero" | I think of this as a mixture of two distributions. The excess zeros are those zeros in excess of what could be produced by a particular process (e.g. Poisson or negative binomial). So, there is a zero present in the data with a certain probability and, if not zero, then its value is governed by the process (e.g. Poisson or negative binomial), where it could also be zero again, of course, by that process. Am curious if I am off base and I am sure someone will point this out.
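A small simulation sketch of this mixture view (an added illustration, not part of the original answer):
set.seed(7)
n <- 10000
pi0    <- 0.3   # probability of a structural ("excess") zero
lambda <- 2     # Poisson mean for the count process
structural_zero <- rbinom(n, 1, pi0) == 1
y <- ifelse(structural_zero, 0, rpois(n, lambda))
mean(y == 0)                      # observed zeros: structural plus Poisson zeros
pi0 + (1 - pi0) * exp(-lambda)    # theoretical proportion of zeros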
30,817 | Whether to use an offset in a Poisson regression when predicting total career goals scored by hockey players | An offset model is modeling goals per game, as one can see here:
log(goals/games) = a + bx
is equivalent to
log(goals) - log(games) = a + bx
is equivalent to
log(goals) = a + bx + log(games)   <- this is an offset model; it assumes the coefficient on the last term is 1
See slide 35 here:
http://www.ed.uiuc.edu/courses/EdPsy490AT/lectures/4glm3-ha-online.pdf
If you think a+bx is related to the log ratio of goals to games (the rate), use an offset. If you think there is a more complicated game effect, perhaps from accumulating experience, do not. For more discussion, see this: http://ezinearticles.com/?The-Exposure-and-Offset-Variables-in-Poisson-Regression-Models&id=2155811
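In R, the two specifications could look like this (a sketch added for illustration; goals, games, and x are hypothetical columns of a data frame players, not objects from the original answer):
fit_offset <- glm(goals ~ x, offset = log(games),
                  family = poisson, data = players)   # coefficient on log(games) fixed at 1
fit_free   <- glm(goals ~ x + log(games),
                  family = poisson, data = players)   # coefficient on log(games) estimated freely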
30,818 | Whether to use an offset in a Poisson regression when predicting total career goals scored by hockey players | A few simple points not directly addressing your question about offsets:
I'd have a look at whether number of games is correlated with mean goals scored. In many elite goal scoring sports that I can think of (e.g., soccer, Australian rules football, etc.) I would predict that longevity of a career is related to the success of a career. And at least for players in goal scoring roles, success is related to number of goals scored.
If this is true, then number of games would capture two effects. One would relate to the mere fact that more games played means more opportunities to score goals; and the other would capture skill-related effects.
You could examine the relationship between number of games and mean goals scored (e.g., goals / number of games) to explore this. I think this has substantive implications for any modelling that you do.
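For example (an added sketch; goals and games are hypothetical columns of a data frame players):
with(players, plot(games, goals / games, xlab = "Games played", ylab = "Goals per game"))
with(players, cor(games, goals / games, method = "spearman"))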
My instincts are to convert the dependent variable into mean goals per game. I realise that you would have more precise measurement of a player's skill for those who played more games, so maybe that would be an issue. Depending on the precision in your model that you desire, and the resulting distribution of player means, you might be able to rely on standard linear modelling techniques. But perhaps this is a bit too applied for your purposes, and perhaps you have reasons for wanting to model total goals scored.
30,819 | "Correlation" terminology in time series analysis | In order to avoid the spurious correlation problem, you should regress two stationary time series against one another. This can (potentially) provide a causal story. It is non-stationary series that lead to spurious correlation. See the reasoning given by my answer to this question (As a footnote, you may not need stationary series if they are integrated series, but I'd point you to any of the applied time series books to learn more about that.) | "Correlation" terminology in time series analysis | In order to avoid the spurious correlation problem, you should regress two stationary time series against one another. This can (potentially) provide a causal story. It is non-stationary series that l | "Correlation" terminology in time series analysis
In order to avoid the spurious correlation problem, you should regress two stationary time series against one another. This can (potentially) provide a causal story. It is non-stationary series that lead to spurious correlation. See the reasoning given by my answer to this question (As a footnote, you may not need stationary series if they are integrated series, but I'd point you to any of the applied time series books to learn more about that.) | "Correlation" terminology in time series analysis
In order to avoid the spurious correlation problem, you should regress two stationary time series against one another. This can (potentially) provide a causal story. It is non-stationary series that l |
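A small hedged R simulation (mine, not part of the original answer) of the point about non-stationary series: two independent random walks typically look strongly related in levels, while the same regression on the differenced, stationary series shows essentially nothing.
# Two completely independent random walks
set.seed(123)
n <- 500
x <- cumsum(rnorm(n))
y <- cumsum(rnorm(n))
summary(lm(y ~ x))$r.squared              # often sizeable: the spurious relationship
summary(lm(diff(y) ~ diff(x)))$r.squared  # near zero once both series are stationary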
30,820 | "Correlation" terminology in time series analysis | There is a good definition of spurious relationship in wikipedia. Spurious means that there is some hidden variable or feature which causes both of the variables. In both time-series and in usual regression then terminology means the same, the relationship between two variables is spurious when something else causes both variables. In time-series context this something else is inherent property of random walks, in usual regression analysis some other variable. | "Correlation" terminology in time series analysis | There is a good definition of spurious relationship in wikipedia. Spurious means that there is some hidden variable or feature which causes both of the variables. In both time-series and in usual regr | "Correlation" terminology in time series analysis
There is a good definition of spurious relationship in wikipedia. Spurious means that there is some hidden variable or feature which causes both of the variables. In both time-series and usual regression settings the terminology means the same thing: the relationship between two variables is spurious when something else causes both variables. In the time-series context this something else is an inherent property of random walks; in usual regression analysis it is some other variable. | "Correlation" terminology in time series analysis
There is a good definition of spurious relationship in wikipedia. Spurious means that there is some hidden variable or feature which causes both of the variables. In both time-series and in usual regr |
30,821 | "Correlation" terminology in time series analysis | As to your main question, my answer is No. If you've seen such a distinction between the way the terms are used in time series contexts vs. cross-sectional contexts, it must be due to one or two idiosyncratic authors you've read. Rigorous authors would never be correct in using "correlation" to mean "causation." You've gotten ahold of a spurious terminology distinction there. Interesting question, though. | "Correlation" terminology in time series analysis | As to your main question, my answer is No. If you've seen such a distinction between the way the terms are used in time series contexts vs. cross-sectional contexts, it must be due to one or two idio | "Correlation" terminology in time series analysis
As to your main question, my answer is No. If you've seen such a distinction between the way the terms are used in time series contexts vs. cross-sectional contexts, it must be due to one or two idiosyncratic authors you've read. Rigorous authors would never be correct in using "correlation" to mean "causation." You've gotten ahold of a spurious terminology distinction there. Interesting question, though. | "Correlation" terminology in time series analysis
As to your main question, my answer is No. If you've seen such a distinction between the way the terms are used in time series contexts vs. cross-sectional contexts, it must be due to one or two idio |
30,822 | How to plot a fan (Polar) Dendrogram in R? | In phylogenetics, this is a fan phylogram, so you can convert this to phylo and use ape:
library(ape)
library(cluster)
data(mtcars)
plot(as.phylo(hclust(dist(mtcars))),type="fan")
Result: | How to plot a fan (Polar) Dendrogram in R? | In phylogenetics, this is a fan phylogram, so you can convert this to phylo and use ape:
library(ape)
library(cluster)
data(mtcars)
plot(as.phylo(hclust(dist(mtcars))),type="fan")
Result: | How to plot a fan (Polar) Dendrogram in R?
In phylogenetics, this is a fan phylogram, so you can convert this to phylo and use ape:
library(ape)
library(cluster)
data(mtcars)
plot(as.phylo(hclust(dist(mtcars))),type="fan")
Result: | How to plot a fan (Polar) Dendrogram in R?
In phylogenetics, this is a fan phylogram, so you can convert this to phylo and use ape:
library(ape)
library(cluster)
data(mtcars)
plot(as.phylo(hclust(dist(mtcars))),type="fan")
Result: |
30,823 | How to plot a fan (Polar) Dendrogram in R? | Did you see this post? http://groups.google.com/group/ggplot2/browse_thread/thread/8e1efd0e7793c1bb
Take the example, add coord_polar() and reverse the axes and you get pretty close:
# Note: this is the older ggplot2 syntax from the linked thread; label.phylo()
# and the xlim value come from that post (they are not base ggplot2/ape functions),
# and theme_blank() has since been replaced by element_blank() in current ggplot2.
library(ape)      # for as.phylo()
library(ggplot2)
library(cluster)
data(mtcars)
x <- as.phylo(hclust(dist(mtcars)))
p <- ggplot(data=x)
p <- p + geom_segment(aes(y=x,x=y,yend=xend,xend=yend), colour="blue",alpha=1)
p <- p + geom_text(data=label.phylo(x), aes(x=y, y=x, label=label),family=3, size=3) + xlim(0, xlim) + coord_polar()
theme <- theme_update( axis.text.x = theme_blank(),
                       axis.ticks = theme_blank(),
                       axis.title.x = theme_blank(),
                       axis.title.y = theme_blank(),
                       legend.position = "none"
)
p <- p + theme_set(theme)
print(p) | How to plot a fan (Polar) Dendrogram in R? | Did you see this post? http://groups.google.com/group/ggplot2/browse_thread/thread/8e1efd0e7793c1bb
Take the example, add coord_polar() and reverse the axes and you get pretty close:
library(cluster) | How to plot a fan (Polar) Dendrogram in R?
Did you see this post? http://groups.google.com/group/ggplot2/browse_thread/thread/8e1efd0e7793c1bb
Take the example, add coord_polar() and reverse the axes and you get pretty close:
library(cluster)
data(mtcars)
x <- as.phylo(hclust(dist(mtcars)))
p <- ggplot(data=x)
p <- p + geom_segment(aes(y=x,x=y,yend=xend,xend=yend), colour="blue",alpha=1)
p <- p + geom_text(data=label.phylo(x), aes(x=y, y=x, label=label),family=3, size=3) + xlim(0, xlim) + coord_polar()
theme <- theme_update( axis.text.x = theme_blank(),
axis.ticks = theme_blank(),
axis.title.x = theme_blank(),
axis.title.y = theme_blank(),
legend.position = "none"
)
p <- p + theme_set(theme)
print(p) | How to plot a fan (Polar) Dendrogram in R?
Did you see this post? http://groups.google.com/group/ggplot2/browse_thread/thread/8e1efd0e7793c1bb
Take the example, add coord_polar() and reverse the axes and you get pretty close:
library(cluster) |
30,824 | How to plot a fan (Polar) Dendrogram in R? | Four years later, I am now able to answer this question. It can be done by combining two new packages: circlize and dendextend.
The plot can be made using the circlize_dendrogram function (allowing for a much more refined control over the "fan" layout of the plot.phylo function).
# install.packages("dendextend")
# install.packages("circlize")
library(dendextend)
library(circlize)
# create a dendrogram
hc <- hclust(dist(datasets::mtcars))
dend <- as.dendrogram(hc)
# modify the dendrogram to have some colors in the branches and labels
dend <- dend %>%
color_branches(k=4) %>%
color_labels
# plot the radial plot
par(mar = rep(0,4))
# circlize_dendrogram(dend, dend_track_height = 0.8)
circlize_dendrogram(dend, labels_track_height = NA, dend_track_height = .4)
And the result is: | How to plot a fan (Polar) Dendrogram in R? | Four years later, I am now able to answer this question. It can be done by combining two new packages: circlize and dendextend.
The plot can be made using the circlize_dendrogram function (allowing fo | How to plot a fan (Polar) Dendrogram in R?
Four years later, I am now able to answer this question. It can be done by combining two new packages: circlize and dendextend.
The plot can be made using the circlize_dendrogram function (allowing for a much more refined control over the "fan" layout of the plot.phylo function).
# install.packages("dendextend")
# install.packages("circlize")
library(dendextend)
library(circlize)
# create a dendrogram
hc <- hclust(dist(datasets::mtcars))
dend <- as.dendrogram(hc)
# modify the dendrogram to have some colors in the branches and labels
dend <- dend %>%
color_branches(k=4) %>%
color_labels
# plot the radial plot
par(mar = rep(0,4))
# circlize_dendrogram(dend, dend_track_height = 0.8)
circlize_dendrogram(dend, labels_track_height = NA, dend_track_height = .4)
And the result is: | How to plot a fan (Polar) Dendrogram in R?
Four years later, I am now able to answer this question. It can be done by combining two new packages: circlize and dendextend.
The plot can be made using the circlize_dendrogram function (allowing fo |
30,825 | Is Glorot/He-style variance-preserving *regularization* a known thing? | There are ways to preserve activation variance with an explicit regularization term. For example, the orthogonality regularizer
$$
\hat{\ell}(\theta) = \sum_k || {\theta_k}^T \theta_k - c_k I ||^2
$$
will do it, given square weight matrices $\theta_k$. (Possibly for rectangular $\theta_k$ as well, but I can't recall or verify at the moment.)
Notation: $\theta$ are network weights, with $\theta_k$ the weight matrix for each layer $k$. $I$ is the identity matrix, and $c_k$ is a scalar gain factor depending on the chosen nonlinearity and on data distribution properties. See details below.
Explanation
To state the goal a bit more formally, for layer $k$ in a neural net:
Let $\theta_k$ be layer parameters, so that the input $x$ and output $y$ of the layer are written $y = f_{\theta_k}(x)$
Let $X_k$ and $Y_k$ be random variables representing layer $k$ input and output, so that we can talk about their distributions. i.e. $Y_k = f_{\theta_k}(X_k)$
Given a training sample $z = (x, y)$ and network weights $\theta = \{\theta_1, \theta_2, \dots\}$, we seek a regularization function $\hat{\ell}(\cdot)$ so that the overall objective function
$$
\mathcal{L}(z; \theta) = \underbrace{\ell(z; \theta)}_{\text{loss}} + \lambda \underbrace{\hat{\ell}(\theta)}_{\text{regularizer}}
$$
induces
$$
\text{Var}(Y_k) \approx \text{Var}(X_k)
$$
for each layer $k$, during and after optimization.
To start simply, assume that $\theta_k$ is a $d \times d$ square matrix and $f_{\theta_k}: \mathbb{R}^d \to \mathbb{R}^d$ is linear, so that the layer is just a matrix multiplication:
$$
\begin{align}
Y_k &= f_{\theta_k}(X_k) \\
&= \theta_kX_k
\end{align}
$$
In this case, enforcing orthogonality of $\theta_k$ is a simple way to preserve variance, regardless of the properties of $X_k$. Enforcing or encouraging orthogonality during gradient descent is a well-studied problem. Since orthogonality can be defined as ${\theta_k}^T \theta_k = I$, a simple regularizer just minimizes the distance between both sides of the equation for each layer:
$$
\hat{\ell}(\theta) = \sum_k || {\theta_k}^T \theta_k - I ||^2
$$
Hence, each weight matrix $\theta_k$ is pushed towards orthogonality, with the proximity to orthogonality controlled by regularization weight $\lambda$ in the objective function $\mathcal{L} = \ell + \lambda \hat{\ell}$
Note, this preserves variance for linear $\mathbb{R}^d \to \mathbb{R}^d$ layers only. As we reintroduce practicality to our layers (e.g. nonlinear activation functions, non-square weight matrices, or even convolutions), more care is necessary to preserve variance. For example, if we assume $X_k \sim N(0, \sigma^2)$ and $f_{\theta_k}$ uses ReLU activation, then $\text{Var}(Y_k) = c \sigma^2$ for some constant $c$. So we should replace the identity matrix $I$ with $\frac{1}{\sqrt{c}} I$ in the regularizer $\hat{\ell}(\cdot)$. (I am too lazy to derive this particular $c$ right now, but empirically, $c \approx 0.34$.)
Different nonlinearities will have different gain factors. For example, I think OPLU conveniently has a gain factor of 1. These gain factors are discussed in a few different places, including the orthogonal initialization and descent literature, and the initialization literature in general.
Instead of soft-constraint by regularization, we can also hard-constrain to orthogonality by various methods. For instance, this paper uses weight orthogonality to address vanishing/exploding gradients in recurrent networks. But since you asked only about regularizers, I won't detail anything about that.
And finally, if one is interested in orthogonality-preserving optimization, then one is also typically interested in orthogonal weight initialization.
(The papers linked here are not comprehensive, and probably a bit out of date. If anyone knows of relevant surveys, do share in the comments.) | Is Glorot/He-style variance-preserving *regularization* a known thing? | There are ways to preserve activation variance with an explicit regularization term. For example, the orthogonality regularizer
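To make the linear case concrete, here is a small hedged R sketch (mine, not the original answer's) of the penalty $||\theta_k^T \theta_k - I||^2$ and of why an orthogonal weight matrix preserves input variance while a generic one need not:
# Orthogonality penalty ||W^T W - I||_F^2 for one square "layer" W
ortho_penalty <- function(W) sum((crossprod(W) - diag(ncol(W)))^2)
set.seed(42)
d <- 50
W_generic    <- matrix(rnorm(d * d, sd = 0.2), d, d)  # arbitrary weights
W_orthogonal <- qr.Q(qr(W_generic))                   # exactly orthogonal
ortho_penalty(W_generic)      # > 0, so the regularizer pushes W towards orthogonality
ortho_penalty(W_orthogonal)   # ~ 0
# Variance preservation for a linear layer y = W x with unit-variance inputs
X <- matrix(rnorm(20000 * d), ncol = d)
var(as.vector(X %*% W_orthogonal))  # ~ 1, variance preserved
var(as.vector(X %*% W_generic))     # ~ d * 0.2^2 = 2 here, i.e. not preserved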
$$
\hat{\ell}(\theta) = \sum_k || {\theta_k}^T \theta_k - c_k I ||^2
$$
| Is Glorot/He-style variance-preserving *regularization* a known thing?
There are ways to preserve activation variance with an explicit regularization term. For example, the orthogonality regularizer
$$
\hat{\ell}(\theta) = \sum_k || {\theta_k}^T \theta_k - c_k I ||^2
$$
will do it, given square weight matrices $\theta_k$. (Possibly for rectangular $\theta_k$ as well, but I can't recall or verify at the moment.)
Notation: $\theta$ are network weights, with $\theta_k$ the weight matrix for each layer $k$. $I$ is the identity matrix, and $c_k$ is a scalar gain factor depending on the chosen nonlinearity and on data distribution properties. See details below.
Explanation
To state the goal a bit more formally, for layer $k$ in a neural net:
Let $\theta_k$ be layer parameters, so that the input $x$ and output $y$ of the layer are written $y = f_{\theta_k}(x)$
Let $X_k$ and $Y_k$ be random variables representing layer $k$ input and output, so that we can talk about their distributions. i.e. $Y_k = f_{\theta_k}(X_k)$
Given a training sample $z = (x, y)$ and network weights $\theta = \{\theta_1, \theta_2, \dots\}$, we seek a regularization function $\hat{\ell}(\cdot)$ so that the overall objective function
$$
\mathcal{L}(z; \theta) = \underbrace{\ell(z; \theta)}_{\text{loss}} + \lambda \underbrace{\hat{\ell}(\theta)}_{\text{regularizer}}
$$
induces
$$
\text{Var}(Y_k) \approx \text{Var}(X_k)
$$
for each layer $k$, during and after optimization.
To start simply, assume that $\theta_k$ is a $d \times d$ square matrix and $f_{\theta_k}: \mathbb{R}^d \to \mathbb{R}^d$ is linear, so that the layer is just a matrix multiplication:
$$
\begin{align}
Y_k &= f_{\theta_k}(X_k) \\
&= \theta_kX_k
\end{align}
$$
In this case, enforcing orthogonality of $\theta_k$ is a simple way to preserve variance, regardless of the properties of $X_k$. Enforcing or encouraging orthogonality during gradient descent is a well-studied problem. Since orthogonality can be defined as ${\theta_k}^T \theta_k = I$, a simple regularizer just minimizes the distance between both sides of the equation for each layer:
$$
\hat{\ell}(\theta) = \sum_k || {\theta_k}^T \theta_k - I ||^2
$$
Hence, each weight matrix $\theta_k$ is pushed towards orthogonality, with the proximity to orthogonality controlled by regularization weight $\lambda$ in the objective function $\mathcal{L} = \ell + \lambda \hat{\ell}$
Note, this preserves variance for linear $\mathbb{R}^d \to \mathbb{R}^d$ layers only. As we reintroduce practicality to our layers (e.g. nonlinear activation functions, non-square weight matrices, or even convolutions), more care is necessary to preserve variance. For example, if we assume $X_k \sim N(0, \sigma^2)$ and $f_{\theta_k}$ uses ReLU activation, then $\text{Var}(Y_k) = c \sigma^2$ for some constant $c$. So we should replace the identity matrix $I$ with $\frac{1}{\sqrt{c}} I$ in the regularizer $\hat{\ell}(\cdot)$. (I am too lazy to derive this particular $c$ right now, but empirically, $c \approx 0.34$.)
Different nonlinearities will have different gain factors. For example, I think OPLU conveniently has a gain factor of 1. These gain factors are discussed in a few different places, including the orthogonal initialization and descent literature, and the initialization literature in general.
Instead of soft-constraint by regularization, we can also hard-constrain to orthogonality by various methods. For instance, this paper uses weight orthogonality to address vanishing/exploding gradients in recurrent networks. But since you asked only about regularizers, I won't detail anything about that.
And finally, if one is interested in orthogonality-preserving optimization, then one is also typically interested in orthogonal weight initialization.
(The papers linked here are not comprehensive, and probably a bit out of date. If anyone knows of relevant surveys, do share in the comments.) | Is Glorot/He-style variance-preserving *regularization* a known thing?
There are ways to preserve activation variance with an explicit regularization term. For example, the orthogonality regularizer
$$
\hat{\ell}(\theta) = \sum_k || {\theta_k}^T \theta_k - c_k I ||^2
$$
|
30,826 | Is Glorot/He-style variance-preserving *regularization* a known thing? | Has this been studied?
Yes, it has been studied. You are describing one of the goals of batchnorm. Other methods exist as well - see @Sycorax's answer for another.
So in fact, it has been implemented in every major library, and used very extensively in research models and in deployed models.
variance of each layer's output is equal to the variance of its input
Maintaining activation variances over the course of gradient descent is one of the two mechanisms behind Batch Normalization (a.k.a. batchnorm). The other mechanism is to maintain activation means. With batchnorm, for each batch of training data a translation and scale parameter is learned for each layer, such that the activations at each layer are re-centered and re-scaled to track zero mean and unit variance.
as a regularization constraint.
Indeed, batchnorm has a regularizing effect. The original goal of batchnorm was to reduce a phenomenon dubbed "internal covariate shift," but more recent research suggests that it has a regularizing effect by smoothing the gradient steps, hence smoothing the effective loss function.
Figure 1 (c) of the latter paper shows layer 3 and layer 11 VGG activation distributions over the course of training, with and without batchnorm:
With batchnorm (blue), we see that the distributions stay more consistent over training, especially in the tail behaviour, and especially in very early training. In the bottom left, note how quickly the activation distributions narrow into tight variance and stay that way for the duration of training. | Is Glorot/He-style variance-preserving *regularization* a known thing? | Has this been studied?
Yes, it has been studied. You are describing one of the goals of batchnorm. Other methods exist as well - see @Sycorax's answer for another.
So in fact, it has been implement | Is Glorot/He-style variance-preserving *regularization* a known thing?
Has this been studied?
Yes, it has been studied. You are describing one of the goals of batchnorm. Other methods exist as well - see @Sycorax's answer for another.
So in fact, it has been implemented in every major library, and used very extensively in research models and in deployed models.
variance of each layer's output is equal to the variance of its input
Maintaining activation variances over the course of gradient descent is one of the two mechanisms behind Batch Normalization (a.k.a. batchnorm). The other mechanism is to maintain activation means. With batchnorm, for each batch of training data a translation and scale parameter is learned for each layer, such that the activations at each layer are re-centered and re-scaled to track zero mean and unit variance.
as a regularization constraint.
Indeed, batchnorm has a regularizing effect. The original goal of batchnorm was to reduce a phenomenon dubbed, "internal covariate shift," but more recent research suggests that it has a regularizing effect by smoothing the gradient steps, hence smoothing the effective loss function.
Figure 1 (c) of the latter paper shows layer 3 and layer 11 VGG activation distributions over the course of training, with and without batchnorm:
With batchnorm (blue), we see that the distributions stay more consistent over training, especially in the tail behaviour, and especially in very early training. In the bottom left, note how quickly the activation distributions narrow into tight variance and stay that way for the duration of training. | Is Glorot/He-style variance-preserving *regularization* a known thing?
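As a rough sketch of the mechanism (my own illustration, not from the papers above), the batchnorm transform for one unit and one mini-batch is just standardization followed by a learned affine map:
# Batchnorm forward pass for the activations of one unit within one mini-batch
batchnorm <- function(x, gamma = 1, beta = 0, eps = 1e-5) {
  x_hat <- (x - mean(x)) / sqrt(var(x) + eps)  # re-center and re-scale the batch
  gamma * x_hat + beta                         # learned scale and shift
}
set.seed(1)
acts <- rnorm(256, mean = 3, sd = 7)   # badly scaled activations
out  <- batchnorm(acts)
c(mean(out), var(out))                 # ~ 0 and ~ 1 when gamma = 1, beta = 0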
Has this been studied?
Yes, it has been studied. You are describing one of the goals of batchnorm. Other methods exist as well - see @Sycorax's answer for another.
So in fact, it has been implement |
30,827 | Is Glorot/He-style variance-preserving *regularization* a known thing? | "Self-Normalizing Neural Networks" by Günter Klambauer Thomas Unterthiner Andreas Mayr & Sepp Hochreiter proposes a neural network that converges to activations with zero mean and unit variance.
Deep Learning has revolutionized vision via convolutional neural networks (CNNs)
and natural language processing via recurrent neural networks (RNNs). However,
success stories of Deep Learning with standard feed-forward neural networks
(FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot
exploit many levels of abstract representations. We introduce self-normalizing
neural networks (SNNs) to enable high-level abstract representations. While
batch normalization requires explicit normalization, neuron activations of SNNs
automatically converge towards zero mean and unit variance. The activation
function of SNNs are “scaled exponential linear units” (SELUs), which induce
self-normalizing properties. Using the Banach fixed-point theorem, we prove that
activations close to zero mean and unit variance that are propagated through many
network layers will converge towards zero mean and unit variance — even under
the presence of noise and perturbations. This convergence property of SNNs allows
to (1) train deep networks with many layers, (2) employ strong regularization
schemes, and (3) to make learning highly robust. Furthermore, for activations
not close to unit variance, we prove an upper and lower bound on the variance,
thus, vanishing and exploding gradients are impossible. We compared SNNs on
(a) 121 tasks from the UCI machine learning repository, on (b) drug discovery
benchmarks, and on (c) astronomy tasks with standard FNNs, and other machine
learning methods such as random forests and support vector machines. For FNNs
we considered (i) ReLU networks without normalization, (ii) batch normalization,
(iii) layer normalization, (iv) weight normalization, (v) highway networks, and (vi)
residual networks. SNNs significantly outperformed all competing FNN methods
at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and
set a new record at an astronomy data set. The winning SNN architectures are often
very deep. | Is Glorot/He-style variance-preserving *regularization* a known thing? | "Self-Normalizing Neural Networks" by Günter Klambauer Thomas Unterthiner Andreas Mayr & Sepp Hochreiter proposes a neural network that converges to activations with zero mean and unit variance.
Deep | Is Glorot/He-style variance-preserving *regularization* a known thing?
"Self-Normalizing Neural Networks" by Günter Klambauer Thomas Unterthiner Andreas Mayr & Sepp Hochreiter proposes a neural network that converges to activations with zero mean and unit variance.
Deep Learning has revolutionized vision via convolutional neural networks (CNNs)
and natural language processing via recurrent neural networks (RNNs). However,
success stories of Deep Learning with standard feed-forward neural networks
(FNNs) are rare. FNNs that perform well are typically shallow and, therefore cannot
exploit many levels of abstract representations. We introduce self-normalizing
neural networks (SNNs) to enable high-level abstract representations. While
batch normalization requires explicit normalization, neuron activations of SNNs
automatically converge towards zero mean and unit variance. The activation
function of SNNs are “scaled exponential linear units” (SELUs), which induce
self-normalizing properties. Using the Banach fixed-point theorem, we prove that
activations close to zero mean and unit variance that are propagated through many
network layers will converge towards zero mean and unit variance — even under
the presence of noise and perturbations. This convergence property of SNNs allows
to (1) train deep networks with many layers, (2) employ strong regularization
schemes, and (3) to make learning highly robust. Furthermore, for activations
not close to unit variance, we prove an upper and lower bound on the variance,
thus, vanishing and exploding gradients are impossible. We compared SNNs on
(a) 121 tasks from the UCI machine learning repository, on (b) drug discovery
benchmarks, and on (c) astronomy tasks with standard FNNs, and other machine
learning methods such as random forests and support vector machines. For FNNs
we considered (i) ReLU networks without normalization, (ii) batch normalization,
(iii) layer normalization, (iv) weight normalization, (v) highway networks, and (vi)
residual networks. SNNs significantly outperformed all competing FNN methods
at 121 UCI tasks, outperformed all competing methods at the Tox21 dataset, and
set a new record at an astronomy data set. The winning SNN architectures are often
very deep. | Is Glorot/He-style variance-preserving *regularization* a known thing?
"Self-Normalizing Neural Networks" by Günter Klambauer Thomas Unterthiner Andreas Mayr & Sepp Hochreiter proposes a neural network that converges to activations with zero mean and unit variance.
Deep |
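A minimal hedged sketch of the SELU activation described in that abstract (constants rounded from the paper), with a quick check of the zero-mean, unit-variance fixed point for standard-normal pre-activations:
# SELU with (rounded) constants from Klambauer et al. (2017)
selu <- function(x, lambda = 1.0507, alpha = 1.6733) {
  lambda * ifelse(x > 0, x, alpha * (exp(x) - 1))
}
set.seed(1)
z <- rnorm(1e6)                  # pre-activations with zero mean and unit variance
a <- selu(z)
c(mean = mean(a), var = var(a))  # both stay close to 0 and 1, the self-normalizing fixed point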
30,828 | Approximation of a probability distribution | The histogram approximation might be better than you think. The simplest "histogram" approximation is to use a discrete distribution with a point mass of $1/n$ at each observation. This is the empirical density, and the corresponding CDF $\hat F_n$ is the empirical cumulative distribution function (ECDF). With iid data, the ECDF enjoys a number of properties, one of which is the Dvoretzky-Kiefer-Wolfowitz inequality:
$$
P\left(\sup_{x\in\mathbb R} |\hat F_n(x) - F(x)| > \epsilon\right) \leq 2e^{-2n\epsilon^2}.
$$
This means that the probability of the largest deviation being greater than some $\epsilon$ decreases exponentially in $n$. Since you have access to lots of samples you can make this probability tiny even for a very small $\epsilon$.
Sampling from $\hat F_n$ is equivalent to taking a bootstrap sample from your data, and the quality of $\hat F_n$ as an estimator of $F$ is a big part of why bootstrapping works so well.
There are lots of other options too though, like kernel density estimators. If you have access to lots of samples then many strategies will work since large-sample properties will likely be kicking in and probably any consistent estimator will perform well. | Approximation of a probability distribution | The histogram approximation might be better than you think. The simplest "histogram" approximation is to use a discrete distribution with a point mass of $1/n$ at each observation. This is the empiric | Approximation of a probability distribution
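A small hedged R illustration (simulated numbers, not from the answer above): drawing from $\hat F_n$ is just resampling the data, and the DKW bound makes the worst-case ECDF error tiny for large $n$.
set.seed(1)
n <- 1e5
x <- rexp(n, rate = 2)                        # stand-in for the many available samples
new_draws <- sample(x, 1000, replace = TRUE)  # draws from \hat F_n, i.e. a bootstrap sample
# Approximate sup-distance between \hat F_n and the true CDF (known here by construction)
grid <- seq(0, 6, by = 0.001)
max(abs(ecdf(x)(grid) - pexp(grid, rate = 2)))  # a few thousandths for n = 1e5
# DKW bound on P(max deviation > 0.01) at this sample size
2 * exp(-2 * n * 0.01^2)                        # about 4e-9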
The histogram approximation might be better than you think. The simplest "histogram" approximation is to use a discrete distribution with a point mass of $1/n$ at each observation. This is the empirical density, and the corresponding CDF $\hat F_n$ is the empirical cumulative distribution function (ECDF). With iid data, the ECDF enjoys a number of properties, one of which is the Dvoretzky-Kiefer-Wolfowitz inequality:
$$
P\left(\sup_{x\in\mathbb R} |\hat F_n(x) - F(x)| > \epsilon\right) \leq 2e^{-2n\epsilon^2}.
$$
This means that the probability of the largest deviation being greater than some $\epsilon$ decreases exponentially in $n$. Since you have access to lots of samples you can make this probability tiny even for a very small $\epsilon$.
Sampling from $\hat F_n$ is equivalent to taking a bootstrap sample from your data, and the quality of $\hat F_n$ as an estimator of $F$ is a big part of why bootstrapping works so well.
There are lots of other options too though, like kernel density estimators. If you have access to lots of samples then many strategies will work since large-sample properties will likely be kicking in and probably any consistent estimator will perform well. | Approximation of a probability distribution
The histogram approximation might be better than you think. The simplest "histogram" approximation is to use a discrete distribution with a point mass of $1/n$ at each observation. This is the empiric |
30,829 | Time series forecasting: from ARIMA to LSTM | There are a couple of good review papers on the topic of deep learning for forecasting:
Neural forecasting: Introduction and literature
overview
Recurrent Neural Networks for Time Series Forecasting: Current Status
and Future Directions
And a very good presentation by the amazon team
A word of warning though: I am a very big fan of LSTM based forecasting and I advocate for it alot in my various roles. But I would be the first to tell you to tread very, very carefully: The number of use cases where LSTM provide an advantage over traditional statistical models is very limited, and Deep Learning is very far from being an established theoretical topic, the way ARIMA or State Space Models are. | Time series forecasting: from ARIMA to LSTM | There are a couple of good review papers on the topic of deep learning for forecasting:
Neural forecasting: Introduction and literature
overview
Recurrent Neural Networks for Time Series Forecasting: | Time series forecasting: from ARIMA to LSTM
There are a couple of good review papers on the topic of deep learning for forecasting:
Neural forecasting: Introduction and literature
overview
Recurrent Neural Networks for Time Series Forecasting: Current Status
and Future Directions
And a very good presentation by the amazon team
A word of warning though: I am a very big fan of LSTM based forecasting and I advocate for it alot in my various roles. But I would be the first to tell you to tread very, very carefully: The number of use cases where LSTM provide an advantage over traditional statistical models is very limited, and Deep Learning is very far from being an established theoretical topic, the way ARIMA or State Space Models are. | Time series forecasting: from ARIMA to LSTM
There are a couple of good review papers on the topic of deep learning for forecasting:
Neural forecasting: Introduction and literature
overview
Recurrent Neural Networks for Time Series Forecasting: |
30,830 | Time series forecasting: from ARIMA to LSTM | The "classical" methods comprise much more than ARIMA and GARCH (which address different questions, and at least ARIMA is not very useful for forecasting), e.g., decomposition, Exponential Smoothing etc. I recommend this very good free online textbook by Athanasopoulos & Hyndman.
I agree that there is very little in terms of textbooks on HMMs or NNs as used for forecasting, and I would be interested in any pointers.
Looking at book reviews in the International Journal of Forecasting may be helpful (even though the list of search results is admittedly not). | Time series forecasting: from ARIMA to LSTM | The "classical" methods comprise much more than ARIMA and GARCH (which address different questions, and at least ARIMA is not very useful for forecasting), e.g., decomposition, Exponential Smoothing e | Time series forecasting: from ARIMA to LSTM
The "classical" methods comprise much more than ARIMA and GARCH (which address different questions, and at least ARIMA is not very useful for forecasting), e.g., decomposition, Exponential Smoothing etc. I recommend this very good free online textbook by Athanasopoulos & Hyndman.
I agree that there is very little in terms of textbooks on HMMs or NNs as used for forecasting, and I would be interested in any pointers.
Looking at book reviews in the International Journal of Forecasting may be helpful (even though the list of search results is admittedly not). | Time series forecasting: from ARIMA to LSTM
The "classical" methods comprise much more than ARIMA and GARCH (which address different questions, and at least ARIMA is not very useful for forecasting), e.g., decomposition, Exponential Smoothing e |
30,831 | Time series forecasting: from ARIMA to LSTM | The combination of differential equations (e.g. ODE of SIR models) and HMM are often used in epidemiology. The hidden states are models as ODEs and the observation process are modeled as HMM. One example is pomp. The model is trained on existing data and produces forecasting on the future. Another goal of this kind of model is to understand epidemiology related parameters. More examples can be found in here and this book | Time series forecasting: from ARIMA to LSTM | The combination of differential equations (e.g. ODE of SIR models) and HMM are often used in epidemiology. The hidden states are models as ODEs and the observation process are modeled as HMM. One exam | Time series forecasting: from ARIMA to LSTM
The combination of differential equations (e.g. the ODEs of SIR models) and HMMs is often used in epidemiology. The hidden states are modeled as ODEs and the observation process is modeled as an HMM. One example is pomp. The model is trained on existing data and produces forecasts of the future. Another goal of this kind of model is to understand epidemiology-related parameters. More examples can be found here and in this book | Time series forecasting: from ARIMA to LSTM
The combination of differential equations (e.g. ODE of SIR models) and HMM are often used in epidemiology. The hidden states are models as ODEs and the observation process are modeled as HMM. One exam |
30,832 | Logarithmic loss vs Brier score vs AUC score | The choice depends on how you plan to use the model. There are many potential strictly proper scoring rules (AUC isn't one). They effectively put different weights on different parts of the probability scale while still all meeting the requirement of having an optimal value at the true probabilities.
I have found the report "Loss Functions for Binary Class Probability Estimation and Classification: Structure and Applications," by Andreas Buja, Werner Stuetzle, and Yi Shen, to be very helpful in thinking about this. The authors show that choice of probability cutoff is equivalent to a choice of the relative cost of false-positive and false-negative classifications. They then provide a way to tailor loss functions to meet different choices of relative costs.
So the choice of scoring rule might best take the eventual use of the model into account. For a bit more detail without going into that full 48-page report, see related answers here and here. | Logarithmic loss vs Brier score vs AUC score | The choice depends on how you plan to use the model. There are many potential strictly proper scoring rules (AUC isn't one). They effectively put different weights on different parts of the probabilit | Logarithmic loss vs Brier score vs AUC score
The choice depends on how you plan to use the model. There are many potential strictly proper scoring rules (AUC isn't one). They effectively put different weights on different parts of the probability scale while still all meeting the requirement of having an optimal value at the true probabilities.
I have found the report "Loss Functions for Binary Class Probability Estimation and Classification: Structure and Applications," by Andreas Buja, Werner Stuetzle, and Yi Shen, to be very helpful in thinking about this. The authors show that choice of probability cutoff is equivalent to a choice of the relative cost of false-positive and false-negative classifications. They then provide a way to tailor loss functions to meet different choices of relative costs.
So the choice of scoring rule might best take the eventual use of the model into account. For a bit more detail without going into that full 48-page report, see related answers here and here. | Logarithmic loss vs Brier score vs AUC score
The choice depends on how you plan to use the model. There are many potential strictly proper scoring rules (AUC isn't one). They effectively put different weights on different parts of the probabilit |
30,833 | Logarithmic loss vs Brier score vs AUC score | The problem with the log loss is that it gives an arbitrarily high penalty for getting a single example completely wrong with high confidence. There are ways of dealing with this, but it can make the metric very sensitive to individual samples, which may not be desirable. | Logarithmic loss vs Brier score vs AUC score | The problem with the log loss is that it gives an arbitrarily high penalty for getting a single example completely wrong with high confidence. There are ways of dealing with this, but it can make the | Logarithmic loss vs Brier score vs AUC score
The problem with the log loss is that it gives an arbitrarily high penalty for getting a single example completely wrong with high confidence. There are ways of dealing with this, but it can make the metric very sensitive to individual samples, which may not be desirable. | Logarithmic loss vs Brier score vs AUC score
The problem with the log loss is that it gives an arbitrarily high penalty for getting a single example completely wrong with high confidence. There are ways of dealing with this, but it can make the |
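A hedged numerical illustration (simulated numbers, not from either answer): one confidently wrong prediction among many good ones moves the log loss far more than the Brier score, because its per-case penalty is unbounded while the Brier penalty is capped at 1.
log_loss <- function(y, p) -mean(y * log(p) + (1 - y) * log(1 - p))
brier    <- function(y, p)  mean((y - p)^2)
y <- rep(1, 101)
p_good <- rep(0.9, 101)            # 101 reasonable predictions
p_bad  <- c(rep(0.9, 100), 1e-6)   # same, but one case predicted ~0 that actually happened
log_loss(y, p_good); log_loss(y, p_bad)  # roughly 0.11 vs 0.24, driven by that one case
brier(y, p_good);    brier(y, p_bad)     # roughly 0.010 vs 0.020, the extra term is bounded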
30,834 | Why is the Beta Distribution Called the Beta Distribution? | Florian Cajori, in History of Mathematical Notations Vol. II (1928), wrote
... in the same paper of 1730 Euler gave what we now call the "beta function." ... About a century after Euler's first introduction of this function, Binet wrote the integral in the form $\int_0^1 x^{p-1} dx(1-x)^{q-1}$ and introduced the Greek letter beta, $B.$ Considering both beta and gamma functions, Binet said: "Je désignerai la première de ces fonctions par $B(p,q);$ et pour la seconde j'adopterai la notation $\Gamma(p)$ proposée par M. Legendre." Legendre had represented the beta function by the sign $\left(\frac{p}{q}\right).$
(Translation: I will call the first of these functions $B(p,q);$ and for the second I will adopt the notation $\Gamma(p)$ proposed by Mr. Legendre.)
Cajori references Jacques P. M. Binet in Journal de l'Ecole Polytechnique, Vol. XVI (1839), p. 131.
A Web page maintained by St. Andrews (Scotland) School of Mathematics and Statistics relates that Binet
wrote Mémoire sur les intégrales définies eulériennes et sur leur application à la théorie des suites; ainsi qu'à l'évaluation des fonctions des grands nombres in 1839. In this paper Binet introduced what today is called the Beta function $B(m,n).$ It has been suggested that Binet chose the notation $B$ and called it a beta function, because of the first letter of his own name. However, there is no evidence to support this claim.
(If I may speculate, I would propose that having placed the two functions in order, Binet selected $B$ as the antecedent letter in the Greek alphabet to $\Gamma$--and might not have minded that it was also his initial.)
A promising reference I came across is a history of the Gamma function: M. Godefroy, La fonction Gamma; Théorie, Histoire, Bibliographie, Gauthier-Villars, Paris (1901), but I haven't searched out a copy.
In his History of Statistics (1986), Stephen Stigler relates that Thomas Bayes worked with the Beta function:
The evaluation of the integral $\int_0^f \theta^p(1-\theta)^q\,d\theta,$ Bayes noted, ... would complete the solution. This integral is now known as the incomplete beta function ... . The first extensive tables of this function were not compiled until this century [20th], when the students in Karl Pearson's laboratory were pressed into reluctant service as "computers." A story, possibly apocryphal, still circulates in University College London of a student who resigned in disgust after a week, telling Pearson of his plans for a different career and announcing, "As far as I am concerned, the Table of the Incomplete Beta Function may stay incomplete."
[at p. 130]. Stigler dates this to c. 1755 [at p. 123], placing it a generation after Euler's paper (q.v.) but almost a century before Binet named it. It doesn't appear that Bayes gave this function any special name, but it is interesting that it had already emerged as important in statistical investigations. | Why is the Beta Distribution Called the Beta Distribution? | Florian Cajori, in History of Mathematical Notations Vol. II (1928), wrote
... in the same paper of 1730 Euler gave what we now call the "beta function." ... About a century after Euler's first intro | Why is the Beta Distribution Called the Beta Distribution?
Florian Cajori, in History of Mathematical Notations Vol. II (1928), wrote
... in the same paper of 1730 Euler gave what we now call the "beta function." ... About a century after Euler's first introduction of this function, Binet wrote the integral in the form $\int_0^1 x^{p-1} dx(1-x)^{q-1}$ and introduced the Greek letter beta, $B.$ Considering both beta and gamma functions, Binet said: "Je désignerai la première de ces fonctions par $B(p,q);$ et pour la seconde j'adopterai la notation $\Gamma(p)$ proposée par M. Legendre." Legendre had represented the beta function by the sign $\left(\frac{p}{q}\right).$
(Translation: I will call the first of these functions $B(p,q);$ and for the second I will adopt the notation $\Gamma(p)$ proposed by Mr. Legendre.)
Cajori references Jacques P. M. Binet in Journal de l'Ecole Polytechnique, Vol. XVI (1839), p. 131.
A Web page maintained by St. Andrews (Scotland) School of Mathematics and Statistics relates that Binet
wrote Mémoire sur les intégrales définies eulériennes et sur leur application à la théorie des suites; ainsi qu'à l'évaluation des fonctions des grands nombres in 1839. In this paper Binet introduced what today is called the Beta function $B(m,n).$ It has been suggested that Binet chose the notation $B$ and called it a beta function, because of the first letter of his own name. However, there is no evidence to support this claim.
(If I may speculate, I would propose that having placed the two functions in order, Binet selected $B$ as the antecedent letter in the Greek alphabet to $\Gamma$--and might not have minded that it was also his initial.)
A promising reference I came across is a history of the Gamma function: M. Godefroy, La fonction Gamma; Théorie, Histoire, Bibliographie, Gauthier-Villars, Paris (1901), but I haven't searched out a copy.
In his History of Statistics (1986), Stephen Stigler relates that Thomas Bayes worked with the Beta function:
The evaluation of the integral $\int_0^f \theta^p(1-\theta)^q\,d\theta,$ Bayes noted, ... would complete the solution. This integral is now known as the incomplete beta function ... . The first extensive tables of this function were not compiled until this century [20th], when the students in Karl Pearson's laboratory were pressed into reluctant service as "computers." A story, possibly apocryphal, still circulates in University College London of a student who resigned in disgust after a week, telling Pearson of his plans for a different career and announcing, "As far as I am concerned, the Table of the Incomplete Beta Function may stay incomplete."
[at p. 130]. Stigler dates this to c. 1755 [at p. 123], placing it a generation after Euler's paper (q.v.) but almost a century before Binet named it. It doesn't appear that Bayes gave this function any special name, but it is interesting that it had already emerged as important in statistical investigations. | Why is the Beta Distribution Called the Beta Distribution?
Florian Cajori, in History of Mathematical Notations Vol. II (1928), wrote
... in the same paper of 1730 Euler gave what we now call the "beta function." ... About a century after Euler's first intro |
30,836 | Quick test of quality of an econometrics textbook | I mostly check if it has a 21st century approach or a 20th century approach. Some indications include
does it pay inordinate attention to topics that had their day, like the Durbin-Watson test, simultaneous equations etc.
further red flags include too much attention being paid to fixed regressors, exact finite-sample results in toy settings, no attention to things that are routine nowadays like Eicker-White standard errors
somewhat similarly, does it mechanically go through "violations of the classical assumptions" in the sense of "if you find heteroskedasticity, do GLS", "if you find serial correlation, do Cochrane-Orcutt" etc. - there is nothing wrong with any of these techniques, but it is rare in practice that these issues occur in isolation. (To be fair, it is much easier to explain what not to do than what to do.)
does it pay attention to numerical implementation, i.e., how to actually carry out an empirical analysis using software, preferably one that allows for easy reproducibility, such as R or Stata?
especially in undergraduate level texts, does it include interesting empirical applications rather than artificial samples which can be fed into a calculator to illustrate computation of some statistic by hand
last but not least, is it typeset properly? | Quick test of quality of an econometrics textbook | I mostly check if it has a 21th century approach or a 20th century approach. Some indications include
does it pay inordinate attention to topics that had their day, like the Durbin-Watson test, simul | Quick test of quality of an econometrics textbook
I mostly check if it has a 21st century approach or a 20th century approach. Some indications include
does it pay inordinate attention to topics that had their day, like the Durbin-Watson test, simultaneous equations etc.
further red flags include too much attention being paid to fixed regressors, exact finite-sample results in toy settings, no attention to things that are routine nowadays like Eicker-White standard errors
somewhat similarly, does it mechanically go through "violations of the classical assumptions" in the sense of "if you find heteroskedasticity, do GLS", "if you find serial correlation, do Cochrane-Orcutt" etc. - there is nothing wrong with any of these techniques, but it is rare in practice that these issues occur in isolation. (To be fair, it is much easier to explain what not to do than what to do.)
does it pay attention to numerical implementation, i.e., how to actually carry out an empirical analysis using software, preferably one that allows for easy reproducibility, such as R or Stata?
especially in undergraduate level texts, does it include interesting empirical applications rather than artificial samples which can be fed into a calculator to illustrate computation of some statistic by hand
last but not least, is it typeset properly? | Quick test of quality of an econometrics textbook
I mostly check if it has a 21st century approach or a 20th century approach. Some indications include
does it pay inordinate attention to topics that had their day, like the Durbin-Watson test, simul |
30,836 | Quick test of quality of an econometrics textbook | Interesting question.
And I agree with every point of Christoph's answer.
I would perhaps add:
a good textbook should have a clear audience. I personally like, for example, the textbook Econometric Analysis of Cross Section and Panel Data by Jeffrey Wooldridge. However, I use it to refresh something I already knew, or to really understand certain topics (e.g. as preparation for second or third year PhD exams), but I would not recommend this textbook for beginner-level or intermediate econometrics courses.
I like it if authors use a chapter to describe a sub-field they are familiar with even if that is not discussed everywhere (I will add an example here, currently thinking). If they manage not to get lost in details, these experts can often give intriguing insights which you do not find for everyday topics.
A good textbook should use definitions which are up-to-date and widely used. As Christoph wrote, please do not use fixed regressors and confuse students why you later suddenly change the "world" (by moving from some "fixed" concepts to random ones). Also please do not use yet another definition of what is a fixed vs a random-effect.
Since much of the literature has become much more applied in recent decades, I think it is important to differentiate between theoretical approximations (asymptotics) and real-world behavior of estimators. Mostly Harmless Econometrics by Joshua Angrist and Jörn-Steffen Pischke is a good example here: it notes, for example, that the difference between logit/probit and OLS often hardly matters in real life (I hope I remember this correctly). This matches my own experience, and it can save you a lot of time! At the same time, please do not ignore theoretical aspects at all (see next point).
Since I left academia, I nowadays read more machine learning related textbooks (if I find the time to still look into textbooks). Here, my small sample impression is that the quality of the top textbooks is extremely high. And often you can read the textbooks online for free, this is amazing (some econometric textbooks are also available on the web but I never know whether legally or not so I have not linked them here)! Nobody has to make his or her book available for free, but students love free books. Make them aware of free good literature! Examples are Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville, An Introduction to Statistical Learning by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani or The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani and Jerome Friedman. Although I personally think that these books should cover more theory (perhaps take a look at econometrics?!), there is much to learn from this field with respect to visualization, writing style, quality of coding etc.!
The interestingness of the examples given in the textbook should be high! I personally do not like toy examples, or (outdated) datasets I already have seen again and again. It is much more interesting for the students if you have data which is still relevant.
Good textbooks should take time to explain difficult concepts such as the incidental parameter problem (and what it means for linear and non-linear models) or avoid such terms.
Last but not least, answer the question students ask again and again. A few examples
Pooled OLS vs Random-Effects vs Fixed-Effects
Why square the difference instead of taking the absolute value in standard deviation?
When is a biased estimator preferable to unbiased one? or in general, differences between "forecasting" and (in-sample) "regression" (e.g. here).
Differences between microeconometrics, macroeconometrics and time series (and more?), e.g. here
Regression and/or matching?
Some other thoughts:
Give some historical context and be critical about what is currently hyped. Tell students, for example, why Nobel laureates such as Angus Deaton or James Heckman are critical of the reduced-form approaches, while of course also explaining what the benefits and alternatives are.
If you do not want to talk about the history. Then perhaps briefly mention more "novel" topics relevant for econometrics such as version-control systems and reproducible codes or how to deal with big data for which classical econometric estimators are often not suited?
As Christoph wrote: some topics are at one point in fashion (such as the mentioned simultaneous equations) but play hardly any role anymore a decade later. However some authors should still aim at giving a broad overview about the field (Econometric Analyses by William Greene or Microeconometrics: Methods and Applications by Colin Cameron and Pravin Trivedi would be two examples). There is a lot of confusion about the need to use simultaneous equations models (with respect to consistency, bias and efficiency). Or why the Heckman correction (or other early "quasi causal" models) are not so popular anymore. So sometimes I find it nice to mention such topics on 1-2 pages and perhaps telling the reader why this topic is not so important anymore. | Quick test of quality of an econometrics textbook | Interesting question.
And I agree with every point of Christoph's answer.
I would perhaps add:
a good textbook should have a clear audience. I personally like, for example, the textbook Econometric | Quick test of quality of an econometrics textbook
Interesting question.
And I agree with every point of Christoph's answer.
I would perhaps add:
a good textbook should have a clear audience. I personally like, for example, the textbook Econometric Analysis of Cross Section and Panel Data by Jeffrey Wooldridge. However, I use it to refresh something I already knew, or to really understand certain topics (e.g. as preparation for second or third year PhD exams) but I would not recommend this textbook beginner-level or intermediate econometrics courses.
I like it if authors use a chapter to describe a sub-field they are familiar with even if that is not discussed everywhere (I will add an example here, currently thinking). If the manage not to get lost in details, these experts can often give intriguing insights which you do not find for every day topics.
A good textbook should use definitions which are up-to-date and widely used. As Christoph wrote, please do not use fixed regressors and confuse students why you later suddenly change the "world" (by moving from some "fixed" concepts to random ones). Also please do not use yet another definition of what is a fixed vs a random-effect.
Since much of the literature has become much more applied in recent decades, I think it is important to differentiate between theoretical approximations (asymptotics) and real-world behavior of estimators. Mostly Harmess Econometrics by Joshua Angrist and Jörn-Steffen Pischke could be here a good example which wrote, for example, that the difference between logit/probit and OLS often hardly matters in real-life (I hope I remember this correctly). This is an experience I also have made which can save you a lot of time! At the same time, please do not ignore theoretical aspects at all (see next point).
Since I left academia, I nowadays read more machine learning related textbooks (if I find the time to still look into textbooks). Here, my small sample impression is that the quality of the top textbooks is extremely high. And often you can read the textbooks online for free, this is amazing (some econometric textbooks are also available on the web but I never know whether legally or not so I have not linked them here)! Nobody has to make his or her book available for free, but students love free books. Make them aware of free good literature! Examples are Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville, An Introduction to Statistical Learning by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani or The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani and Jerome Friedman. Although I personally think that these books should cover more theory (perhaps take a look at econometrics?!), there is much to learn from this field with respect to visualization, writing style, quality of coding etc.!
The interestingness of the examples given in the textbook should be high! I personally do not like toy examples, or (outdated) datasets I already have seen again and again. It is much more interesting for the students if you have data which is still relevant.
Good textbooks should take time to explain difficult concepts such as the incidental parameter problem (and what it means for linear and non-linear models) or avoid such terms.
Last but not least, answer the question students ask again and again. A few examples
Pooled OLS vs Random-Effects vs Fixed-Effects
Why square the difference instead of taking the absolute value in standard deviation?
When is a biased estimator preferable to unbiased one? or in general, differences between "forecasting" and (in-sample) "regression" (e.g. here).
Differences between microeconometrics, macroeconometrics and time series (and more?), e.g. here
Regression and/or matching?
Some other thoughts:
Give some historical context, be critical about what is hyped currently. Tell students, for example, why a nobel laureates such as Angus Deaton or James Heckman are critical about the reduced-form approaches. While of course also saying that are the benefits/alternatives.
If you do not want to talk about the history. Then perhaps briefly mention more "novel" topics relevant for econometrics such as version-control systems and reproducible codes or how to deal with big data for which classical econometric estimators are often not suited?
As Christoph wrote: some topics are at one point in fashion (such as the mentioned simultaneous equations) but play hardly any role anymore a decade later. However some authors should still aim at giving a broad overview about the field (Econometric Analyses by William Greene or Microeconometrics: Methods and Applications by Colin Cameron and Pravin Trivedi would be two examples). There is a lot of confusion about the need to use simultaneous equations models (with respect to consistency, bias and efficiency). Or why the Heckman correction (or other early "quasi causal" models) are not so popular anymore. So sometimes I find it nice to mention such topics on 1-2 pages and perhaps telling the reader why this topic is not so important anymore. | Quick test of quality of an econometrics textbook
Interesting question.
And I agree with every point of Christoph's answer.
I would perhaps add:
a good textbook should have a clear audience. I personally like, for example, the textbook Econometric |
30,837 | If the square of a time series is stationary, is the original time series stationary? | That conjecture is false. A simple counter-example is the deterministic time-series $X_t = (-1)^t$ over times $t \in \mathbb{Z}$. This time series is not even mean stationary, but its square is strictly stationary. | If the square of a time series is stationary, is the original time series stationary? | That conjecture is false. A simple counter-example is the deterministic time-series $X_t = (-1)^t$ over times $t \in \mathbb{Z}$. This time series is not even mean stationary, but its square is stri | If the square of a time series is stationary, is the original time series stationary?
That conjecture is false. A simple counter-example is the deterministic time-series $X_t = (-1)^t$ over times $t \in \mathbb{Z}$. This time series is not even mean stationary, but its square is strictly stationary. | If the square of a time series is stationary, is the original time series stationary?
That conjecture is false. A simple counter-example is the deterministic time-series $X_t = (-1)^t$ over times $t \in \mathbb{Z}$. This time series is not even mean stationary, but its square is stri |
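A tiny R illustration of this counter-example: the series alternates in sign (so its mean function depends on $t$), while its square is constant.
t <- 1:10
x <- (-1)^t
x     # alternates between -1 and 1, so the mean function (-1)^t depends on t
x^2   # identically 1, hence (trivially) strictly stationary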
30,838 | How to reduce predictors the right way for a logistic regression model | Some of the answers you have received that push feature selection are off base.
The lasso or better the elastic net will do feature selection but as pointed out above you will be quite disappointed at the volatility of the set of "selected" features. I believe the only real hope in your situation is data reduction, i.e., unsupervised learning, as I emphasize in my book. Data reduction brings more interpretability and especially more stability. I very much recommend sparse principal components, or variable clustering followed by regular principal components on clusters.
The information content in your dataset is far, far too low for any feature selection algorithm to be reliable. | How to reduce predictors the right way for a logistic regression model | Some of the answers you have received that push feature selection are off base.
The lasso or better the elastic net will do feature selection but as pointed out above you will be quite disappointed at | How to reduce predictors the right way for a logistic regression model
Some of the answers you have received that push feature selection are off base.
The lasso or better the elastic net will do feature selection but as pointed out above you will be quite disappointed at the volatility of the set of "selected" features. I believe the only real hope in your situation is data reduction, i.e., unsupervised learning, as I emphasize in my book. Data reduction brings more interpretability and especially more stability. I very much recommend sparse principal components, or variable clustering followed by regular principal components on clusters.
The information content in your dataset is far, far too low for any feature selection algorithm to be reliable. | How to reduce predictors the right way for a logistic regression model
Some of the answers you have received that push feature selection are off base.
The lasso or better the elastic net will do feature selection but as pointed out above you will be quite disappointed at |
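A minimal sketch of the data-reduction idea described above: compute principal components of the predictors without looking at the outcome, then fit the logistic regression on a couple of summary scores. The data here are simulated stand-ins, not the poster's dataset.
set.seed(42)
n <- 45; p <- 40
X <- matrix(rnorm(n * p), n, p)                  # stand-in predictors
y <- rbinom(n, 1, 0.5)                           # stand-in binary outcome

pc <- prcomp(X, scale. = TRUE)                   # unsupervised: computed without looking at y
scores <- pc$x[, 1:2]                            # keep only a couple of summary scores
summary(glm(y ~ scores, family = binomial))      # logistic regression on the reduced scores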
30,839 | How to reduce predictors the right way for a logistic regression model | +1 for "sometimes seems a bit overwhelming". It really depends (as Harrell clearly states; see the section at the end of Chapter 4) whether you want to do
confirmatory analysis ($\to$ reduce your predictor complexity to a reasonable level without looking at the responses, by PCA or subject-area considerations or ...)
predictive analysis ($\to$ use appropriate penalization methods). Lasso could very well work OK with 100 predictors, if you have a reasonably large sample. Feature selection will be unstable, but that's OK if all you care about is prediction. I have a personal preference for ridge-like approaches that don't technically "select features" (because they never reduce any parameter to exactly zero), but whatever works ...
You'll have to use cross-validation to choose the degree of penalization, which will destroy your ability to do inference (construct confidence intervals on predictions) unless you use cutting-edge high-dimensional inference methods (e.g. Dezeure et al 2015; I have not tried these approaches but they seem sensible ...)
exploratory analysis: have fun, be transparent and honest, don't quote any p-values.
For the particular use case you have now described (a bunch of your predictors essentially represent a cumulative distribution of the dose received by different fractions of the heart), you might want to look into varying-coefficient models (a little hard to search for), which basically fit a smooth curve for the effect of the CDF (these can be implemented in R's mgcv package). | How to reduce predictors the right way for a logistic regression model | +1 for "sometimes seems a bit overwhelming". It really depends (as Harrell clearly states; see the section at the end of Chapter 4) whether you want to do
confirmatory analysis ($\to$ reduce your p | How to reduce predictors the right way for a logistic regression model
+1 for "sometimes seems a bit overwhelming". It really depends (as Harrell clearly states; see the section at the end of Chapter 4) whether you want to do
confirmatory analysis ($\to$ reduce your predictor complexity to a reasonable level without looking at the responses, by PCA or subject-area considerations or ...)
predictive analysis ($\to$ use appropriate penalization methods). Lasso could very well work OK with 100 predictors, if you have a reasonably large sample. Feature selection will be unstable, but that's OK if all you care about is prediction. I have a personal preference for ridge-like approaches that don't technically "select features" (because they never reduce any parameter to exactly zero), but whatever works ...
You'll have to use cross-validation to choose the degree of penalization, which will destroy your ability to do inference (construct confidence intervals on predictions) unless you use cutting-edge high-dimensional inference methods (e.g. Dezeure et al 2015; I have not tried these approaches but they seem sensible ...)
exploratory analysis: have fun, be transparent and honest, don't quote any p-values.
For the particular use case you have now described (a bunch of your predictors essentially represent a cumulative distribution of the dose received by different fractions of the heart), you might want to look into varying-coefficient models (a little hard to search for), which basically fit a smooth curve for the effect of the CDF (these can be implemented in R's mgcv package). | How to reduce predictors the right way for a logistic regression model
+1 for "sometimes seems a bit overwhelming". It really depends (as Harrell clearly states; see the section at the end of Chapter 4) whether you want to do
confirmatory analysis ($\to$ reduce your p |
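For the predictive/penalization route, a rough sketch using the glmnet package (assumed to be installed); alpha = 0 gives ridge, alpha = 1 gives the lasso, and intermediate values give the elastic net. The simulated data and tuning choices are illustrative only.
library(glmnet)                                  # assumed installed
set.seed(1)
n <- 200; p <- 100
X <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, plogis(X[, 1] - X[, 2]))

cvfit <- cv.glmnet(X, y, family = "binomial", alpha = 0)  # ridge, penalty tuned by CV
head(as.matrix(coef(cvfit, s = "lambda.min")))            # shrunken coefficients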
30,840 | How to reduce predictors the right way for a logistic regression model | There are many different approaches. What I would recommend is trying some simple ones, in the following order:
L1 regularization (with increasing penalty; the larger the regularization coefficient, the more features will be eliminated)
Recursive Feature Elimination (https://scikit-learn.org/stable/modules/feature_selection.html#recursive-feature-elimination) -- removes features incrementally by eliminating the features associated with the smallest model coefficients (assuming that those are the least important ones; obviously, it's crucial here to normalize the input features)
Sequential Feature Selection (http://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/) -- removes features based on how important they are for predictive performance | How to reduce predictors the right way for a logistic regression model | There are many different approaches. What I would recommend is trying some simple ones, in the following order:
L1 regularization (with increasing penalty; the larger the regularization coefficient, | How to reduce predictors the right way for a logistic regression model
There are many different approaches. What I would recommend is trying some simple ones, in the following order:
L1 regularization (with increasing penalty; the larger the regularization coefficient, the more features will be eliminated)
Recursive Feature Elimination (https://scikit-learn.org/stable/modules/feature_selection.html#recursive-feature-elimination) -- removes features incrementally by eliminating the features associated with the smallest model coefficients (assuming that those are the least important ones; obviously, it's crucial here to normalize the input features)
Sequential Feature Selection (http://rasbt.github.io/mlxtend/user_guide/feature_selection/SequentialFeatureSelector/) -- removes features based on how important they are for predictive performance | How to reduce predictors the right way for a logistic regression model
There are many different approaches. What I would recommend is trying some simple ones, in the following order:
L1 regularization (with increasing penalty; the larger the regularization coefficient, |
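A rough R sketch of the recursive-feature-elimination idea (repeatedly dropping the predictor with the smallest absolute coefficient on standardized inputs). This only illustrates the principle, not scikit-learn's implementation; all names and data below are made up.
set.seed(2)
n <- 300; p <- 10
X <- scale(matrix(rnorm(n * p), n, p))           # standardize so coefficients are comparable
colnames(X) <- paste0("x", 1:p)
y <- rbinom(n, 1, plogis(X[, 1] + 0.5 * X[, 2]))

keep <- colnames(X)
while (length(keep) > 3) {                       # stop once 3 features remain
  fit <- glm(y ~ X[, keep], family = binomial)
  b <- abs(coef(fit)[-1])                        # drop the intercept
  keep <- keep[-which.min(b)]                    # eliminate the weakest feature
}
keep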
30,841 | What would be the output distribution of ReLu activation? | Your question seems to boil down to the following:
Suppose $X \sim N(\mu, \sigma^2)$.
What is the distribution of $Y = \operatorname{ReLU}(X) = \max\{0, X\}$?
Answer.
Let $F_X$ and $F_Y$ denote the cumulative distribution functions of $X$ and $Y$, respectively.
Let $\Phi$ be the standard normal cumulative distribution function:
$$
\Phi(z) = \int_{-\infty}^z \frac{1}{\sqrt{2 \pi}} e^{-t^2 / 2} \, dt,
$$
so that
$$
F_X(x)
= \Phi\left(\frac{x - \mu}{\sigma}\right)
$$
for all $x \in \mathbb{R}$.
If $y \in \mathbb{R}$, then
$$
\begin{aligned}
F_Y(y)
&= P(Y \leq y) \\
&= P(\max\{0, X\} \leq y) \\
&= P(0 \leq y, X \leq y) &&\text{(*)} \\
&= \begin{cases}
0, & \text{if $y < 0$}, \\
P(X \leq y), & \text{if $y \geq 0$}
\end{cases} \\
&= \begin{cases}
0, & \text{if $y < 0$}, \\
F_X(y), & \text{if $y \geq 0$}
\end{cases} \\
&= \begin{cases}
0, & \text{if $y < 0$}, \\
\Phi\left(\frac{y - \mu}{\sigma}\right), & \text{if $y \geq 0$}
\end{cases}
\end{aligned}
$$
(*) Here we used the fact that $\max\{a, b\} \leq c$ if and only if $a \leq c$ and $b \leq c$ (for any $a, b, c \in \mathbb{R}$).
It's worth emphasizing that $F_Y$ is the cumulative distribution function.
I don't know if this distribution has a name off the top of my head, but knowing the cumulative distribution function allows you to say everything there is to say about the distribution of $Y$.
Visualization
Here is a plot of the cumulative distribution function of $Y$ for various distributions of $X$:
Note: the distribution of $Y$ is neither discrete nor continuous!
You can see that the distribution of $Y$ is not continuous since continuous distributions have continuous cumulative distribution functions (and $Y$ clearly does not), and $Y$ is not discrete because discrete distributions have piecewise constant cumulative distribution functions (which again $Y$ does not).
In particular, this means that $Y$ does not have a density function.
Effect of Affine Transformations
Suppose your neural network has $p$-dimensional $\mathbf{X} = (X_1, \ldots, X_p) \sim N_p(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ (multivariate normal with mean $\boldsymbol{\mu} \in \mathbb{R}^p$ and covariance matrix $\boldsymbol{\Sigma} \in \mathbb{R}^{p \times p}$).
Suppose that the next layer consists of $q$ units $\mathbf{Y} = (Y_1, \ldots, Y_q) \in \mathbb{R}^q$ given by an affine transformation followed by ReLU:
$$
Y_i = \operatorname{ReLU}\left(b_i + \sum_{j=1}^p w_{i, j} X_j\right).
$$
Let $\mathbf{X}^\prime = (X_1^\prime, \ldots, X_q^\prime)$ denote the pre-activations:
$$
X_i^\prime = b_i + \sum_{j=1}^p w_{i, j} X_j.
$$
More concisely,
$$
\mathbf{X}^\prime = \mathbf{b} + \mathbf{W} \mathbf{X},
$$
where $\mathbf{b} = (b_1, \ldots, b_q)$ and $\mathbf{W}$ is the matrix of the $w_{i, j}$'s.
Since $\mathbf{X}$ is multivariate normal, so is $\mathbf{X}^\prime$, and we have
$$
\mathbf{X}^\prime
\sim N_q(\mathbf{b} + \mathbf{W}\boldsymbol{\mu}, \mathbf{W} \boldsymbol{\Sigma} \mathbf{W}^\top).
$$
In particular, each component $X_i^\prime$ of $\mathbf{X}^\prime$ is itself univariate normal with some mean and variance that can be read off from the joint mean and variance
Then we can apply the argument at the top of this answer to figure out the distribution of each activation $Y_i = \operatorname{ReLU}(X_i^\prime)$. | What would be the output distribution of ReLu activation? | Your question seems to boil down to the following:
Suppose $X \sim N(\mu, \sigma^2)$.
What is the distribution of $Y = \operatorname{ReLU}(X) = \max\{0, X\}$?
Answer.
Let $F_X$ and $F_Y$ denote th | What would be the output distribution of ReLu activation?
Your question seems to boil down to the following:
Suppose $X \sim N(\mu, \sigma^2)$.
What is the distribution of $Y = \operatorname{ReLU}(X) = \max\{0, X\}$?
Answer.
Let $F_X$ and $F_Y$ denote the cumulative distribution functions of $X$ and $Y$, respectively.
Let $\Phi$ be the standard normal cumulative distribution function:
$$
\Phi(z) = \int_{-\infty}^z \frac{1}{\sqrt{2 \pi}} e^{-t^2 / 2} \, dt,
$$
so that
$$
F_X(x)
= \Phi\left(\frac{x - \mu}{\sigma}\right)
$$
for all $x \in \mathbb{R}$.
If $y \in \mathbb{R}$, then
$$
\begin{aligned}
F_Y(y)
&= P(Y \leq y) \\
&= P(\max\{0, X\} \leq y) \\
&= P(0 \leq y, X \leq y) &&\text{(*)} \\
&= \begin{cases}
0, & \text{if $y < 0$}, \\
P(X \leq y), & \text{if $y \geq 0$}
\end{cases} \\
&= \begin{cases}
0, & \text{if $y < 0$}, \\
F_X(y), & \text{if $y \geq 0$}
\end{cases} \\
&= \begin{cases}
0, & \text{if $y < 0$}, \\
\Phi\left(\frac{y - \mu}{\sigma}\right), & \text{if $y \geq 0$}
\end{cases}
\end{aligned}
$$
(*) Here we used the fact that $\max\{a, b\} \leq c$ if and only if $a \leq c$ and $b \leq c$ (for any $a, b, c \in \mathbb{R}$).
It's worth emphasizing that $F_Y$ is the cumulative distribution function.
I don't know if this distribution has a name off the top of my head, but knowing the cumulative distribution function allows you to say everything there is to say about the distribution of $Y$.
Visualization
Here is a plot of the cumulative distribution function of $Y$ for various distributions of $X$:
Note: the distribution of $Y$ is neither discrete nor continuous!
You can see that the distribution of $Y$ is not continuous since continuous distributions have continuous cumulative distribution functions (and $Y$ clearly does not), and $Y$ is not discrete because discrete distributions have piecewise constant cumulative distribution functions (which again $Y$ does not).
In particular, this means that $Y$ does not have a density function.
Effect of Affine Transformations
Suppose your neural network has $p$-dimensional $\mathbf{X} = (X_1, \ldots, X_p) \sim N_p(\boldsymbol{\mu}, \boldsymbol{\Sigma})$ (multivariate normal with mean $\boldsymbol{\mu} \in \mathbb{R}^p$ and covariance matrix $\boldsymbol{\Sigma} \in \mathbb{R}^{p \times p}$).
Suppose that the next layer consists of $q$ units $\mathbf{Y} = (Y_1, \ldots, Y_q) \in \mathbb{R}^q$ given by an affine transformation followed by ReLU:
$$
Y_i = \operatorname{ReLU}\left(b_i + \sum_{j=1}^p w_{i, j} X_j\right).
$$
Let $\mathbf{X}^\prime = (X_1^\prime, \ldots, X_q^\prime)$ denote the pre-activations:
$$
X_i^\prime = b_i + \sum_{j=1}^p w_{i, j} X_j.
$$
More concisely,
$$
\mathbf{X}^\prime = \mathbf{b} + \mathbf{W} \mathbf{X},
$$
where $\mathbf{b} = (b_1, \ldots, b_q)$ and $\mathbf{W}$ is the matrix of the $w_{i, j}$'s.
Since $\mathbf{X}$ is multivariate normal, so is $\mathbf{X}^\prime$, and we have
$$
\mathbf{X}^\prime
\sim N_q(\mathbf{b} + \mathbf{W}\boldsymbol{\mu}, \mathbf{W} \boldsymbol{\Sigma} \mathbf{W}^\top).
$$
In particular, each component $X_i^\prime$ of $\mathbf{X}^\prime$ is itself univariate normal with some mean and variance that can be read off from the joint mean and variance
Then we can apply the argument at the top of this answer to figure out the distribution of each activation $Y_i = \operatorname{ReLU}(X_i^\prime)$. | What would be the output distribution of ReLu activation?
Your question seems to boil down to the following:
Suppose $X \sim N(\mu, \sigma^2)$.
What is the distribution of $Y = \operatorname{ReLU}(X) = \max\{0, X\}$?
Answer.
Let $F_X$ and $F_Y$ denote th |
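A quick numerical check of the derived CDF: for $y \geq 0$, the empirical CDF of $Y = \max\{0, X\}$ should match $\Phi((y-\mu)/\sigma)$, with a point mass of $\Phi((0-\mu)/\sigma)$ at zero. The parameter values below are arbitrary.
mu <- 0.5; sigma <- 2
x <- rnorm(1e5, mu, sigma)
y <- pmax(0, x)                                  # ReLU

mean(y == 0); pnorm((0 - mu) / sigma)            # point mass at zero vs. Phi((0 - mu)/sigma)
mean(y <= 1.5); pnorm((1.5 - mu) / sigma)        # empirical vs. theoretical F_Y(1.5)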
30,842 | What would be the output distribution of ReLu activation? | Adding to @Artem Mavrin's answer above and providing a short answer to the following rephrased question:
Suppose $X \sim N(\mu, \sigma^2)$.
What is the distribution of $Y = \operatorname{ReLU}(X) = \max\{0, X\}$?
The probability density function (pdf) of $Y$ will be a multidimensional version of the Rectified Gaussian distribution (https://en.wikipedia.org/wiki/Rectified_Gaussian_distribution).
This is a hybrid (discrete-continuous) distribution with a point mass at the origin, a multivariate Gaussian in the all-positive part of the space and 0 everywhere else. | What would be the output distribution of ReLu activation? | Adding to @Artem Mavrin's answer above and providing a short answer to the following rephrased question:
Suppose $X \sim N(\mu, \sigma^2)$.
What is the distribution of $Y = \operatorname{ReLU}(X) = | What would be the output distribution of ReLu activation?
Adding to @Artem Mavrin's answer above and providing a short answer to the following rephrased question:
Suppose $X \sim N(\mu, \sigma^2)$.
What is the distribution of $Y = \operatorname{ReLU}(X) = \max\{0, X\}$?
The probability density function (pdf) of $Y$ will be a multidimensional version of the Rectified Gaussian distribution (https://en.wikipedia.org/wiki/Rectified_Gaussian_distribution).
This is a hybrid (discrete-continuous) distribution with a point mass at the origin, a multivariate Gaussian in the all-positive part of the space and 0 everywhere else. | What would be the output distribution of ReLu activation?
Adding to @Artem Mavrin's answer above and providing a short answer to the following rephrased question:
Suppose $X \sim N(\mu, \sigma^2)$.
What is the distribution of $Y = \operatorname{ReLU}(X) = |
30,843 | What is the problem with $p > n$? | This can occur in many scenarios; a few examples are:
Medical data analysis at hospitals. Medical researchers studying a particular cancer can primarily collect data at their own hospital, and I think it is not a bad thing that they try to collect as many variables as possible from each patient, such as age, gender, tumour size, MRI and CT volume.
Microarray and plate-reader studies in bioinformatics. It is often the case that you don’t have many species, but you want to be able to test for as many effects as possible.
Analysis with images. You often have 16 million pixels per image, while it is very difficult to collect and store that many images.
MRI reconstruction poses similar problems that need sparse regression techniques, and improving them is a central question in MRI research.
The solution, really, is to look at the regression literature and find what works best for your application.
If you have domain knowledge, incorporate it into your prior distribution and take a Bayesian approach with Bayesian linear regression.
If you want to find a sparse solution, automatic relevance determination’s empirical Bayes approach could be the way to go.
If you think that a probabilistic formulation is inappropriate for your problem (as when simply solving a linear system of equations), it might be worth looking at the Moore-Penrose pseudoinverse.
You can approach it from a feature-selection perspective and reduce the number of predictors $p$ until the problem is well posed.
Medical data analysis at hospitals. Medical researchers studying a particular cancer primarily can do data collection at their own hospital, and | What is the problem with $p > n$?
This could occur in many scenarios, few examples are:
Medical data analysis at hospitals. Medical researchers studying a particular cancer primarily can do data collection at their own hospital, and I think it is not a bad thing that they try collect many variables as possible from one particular patient like age, gender, tumour size, MRI, CT volume.
Micro platereader array studies in bioinformatics. It is often the case that you don’t have many species but you want to be able to test for as many effects as possible.
Analysis with images. You have often 16 million pixels while it is very difficult to collect and store that many images.
MRI reconstructions are often similar problems, which need sparse regression techniques, and improving them is really a central question in MRI imaging research.
The solution is really, to look at the regression literature and find what best works for your application.
If you have domain knowledge, incorporate into your prior distribution and take a Bayesian approach with Bayesian Linear Regression.
If you want to find a sparse solution, automatic relevance determination’s empirical Bayes approach could be the way to go.
If you think that with your problem, having a notion of probabilities is inappropriate (like solving a linear systems of equations), it might be worth to look at the Moore-Penrose pseudoinverse.
You can approach it from a feature selection perspective, and reduce the number of p until it is a well-posed problem. | What is the problem with $p > n$?
This could occur in many scenarios, few examples are:
Medical data analysis at hospitals. Medical researchers studying a particular cancer primarily can do data collection at their own hospital, and |
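A small sketch of two of the suggestions above for $p > n$: the Moore-Penrose pseudoinverse (via MASS::ginv), which returns the minimum-norm least-squares solution, and a ridge estimate. The simulated dimensions are arbitrary.
library(MASS)                                    # for ginv()
set.seed(3)
n <- 45; p <- 100
X <- matrix(rnorm(n * p), n, p)
y <- rnorm(n)

beta_pinv <- ginv(X) %*% y                       # minimum-norm least-squares solution
max(abs(X %*% beta_pinv - y))                    # ~0: interpolates the training data exactly

lambda <- 1
beta_ridge <- solve(crossprod(X) + lambda * diag(p), crossprod(X, y))  # ridge estimate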
30,844 | What is the problem with $p > n$? | This is a very good question. When the number of candidate predictors $p$ is more than the effective sample size $n$, and one does not place any restrictions on the regression coefficients (e.g., one is not using shrinkage, a.k.a. penalized maximum likelihood estimation or regularization), the situation is hopeless. I say that for several reasons including
If you think about the number of non-redundant linear combination of variables that can be analyzed, this number is $\leq \min(n, p)$. For example you can't even compute, much less trust, principal components beyond $\min(n, p)$.
With $p = n$ and no two $y$-coordinates on a vertical line when plotting $(x, y)$, one can achieve $R^{2}=1.0$ for any dataset even if the true population $R^2$ is 0.0.
If you use any feature selection algorithm such as dreaded stepwise regression models, the list of features "selected" will essentially be a random set of features with no hope of replicating in another sample. This is especially true if there are correlations among the candidate features, e.g., co-linearity.
The value of $n$ needed to estimate with decent precision a single correlation coefficient between two variables is about 400. See here.
In general, a study that intends to analyze 45 variables on 45 subjects is poorly planned, and the only ways to rescue it that I know of are
Pre-specify one or two predictors to analyze and ignore the rest
Use penalized estimation such as ridge regression to fit all the variables but take the coefficients with a grain of salt (heavy discounting)
Use data reduction, e.g., principal components, variable clustering, sparse principal components (my favorite) as discussed in my RMS book and course notes. This involves combining variables that are hard to separate, and not trying to estimate separate effects for them. For $n=45$ you may only get by with 2 collapsed scores for playing against $y$. Data reduction (unsupervised learning) is more interpretable than most other methods.
A technical detail: if you use one of the best combination variable selection/penalization methods such as lasso or elastic net you can lower the chance of overfitting but will ultimately be disappointed that the list of selected features is highly unstable and will not replicate in other datasets. | What is the problem with $p > n$? | This is a very good question. When the number of candidate predictors $p$ is more than the effective sample size $n$, and one does not place any restrictions on the regression coefficients (e.g., one | What is the problem with $p > n$?
This is a very good question. When the number of candidate predictors $p$ is more than the effective sample size $n$, and one does not place any restrictions on the regression coefficients (e.g., one is not using shrinkage, a.k.a. penalized maximum likelihood estimation or regularization), the situation is hopeless. I say that for several reasons including
If you think about the number of non-redundant linear combination of variables that can be analyzed, this number is $\leq \min(n, p)$. For example you can't even compute, much less trust, principal components beyond $\min(n, p)$.
With $p = n$ and no two $y$-coordinates on a vertical line when plotting $(x, y)$, one can achieve $R^{2}=1.0$ for any dataset even if the true population $R^2$ is 0.0.
If you use any feature selection algorithm such as dreaded stepwise regression models, the list of features "selected" will essentially be a random set of features with no hope of replicating in another sample. This is especially true if there are correlations among the candidate features, e.g., co-linearity.
The value of $n$ needed to estimate with decent precision a single correlation coefficient between two variables is about 400. See here.
In general, a study that intends to analyze 45 variables on 45 subjects is poorly planned, and the only ways to rescue it that I know of are
Pre-specify one or two predictors to analyze and ignore the rest
Use penalized estimation such as ridge regression to fit all the variables but take the coefficients with a grain of salt (heavy discounting)
Use data reduction, e.g., principal components, variable clustering, sparse principal components (my favorite) as discussed in my RMS book and course notes. This involves combining variables that are hard to separate, and not trying to estimate separate effects for them. For $n=45$ you may only get by with 2 collapsed scores for playing against $y$. Data reduction (unsupervised learning) is more interpretable than most other methods.
A technical detail: if you use one of the best combination variable selection/penalization methods such as lasso or elastic net you can lower the chance of overfitting but will ultimately be disappointed that the list of selected features is highly unstable and will not replicate in other datasets. | What is the problem with $p > n$?
This is a very good question. When the number of candidate predictors $p$ is more than the effective sample size $n$, and one does not place any restrictions on the regression coefficients (e.g., one |
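A small simulation of the overfitting point made above: with (nearly) as many pure-noise predictors as observations, the in-sample $R^2$ is 1 even though the true population $R^2$ is 0. The sizes below are arbitrary.
set.seed(4)
n <- 20
X <- matrix(rnorm(n * (n - 1)), n, n - 1)        # n - 1 pure-noise predictors
y <- rnorm(n)                                    # response unrelated to X
summary(lm(y ~ X))$r.squared                     # equals 1: a perfect (and meaningless) in-sample fit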
30,845 | Can $R^2$ be greater than 1? | I found the answer, so will post the answer to my question. As Martijn pointed out, with linear regression you can compute $R^2$ by two equivalent expressions:
$R^2 = 1- SS_e/SS_t = SS_m/SS_t$
With nonlinear regression, you cannot sum the sum-of-squares of residuals and sum-of-squares of the regression to obtain the total sum-of-squares. That equation is simply not true, so the equation above is not right. Those two expressions compute two different values for $R^2$.
The only equation that makes sense and is (I think) universally used is:
$R^2 = 1- SS_e/SS_t$
Its value is never greater than 1.0, but it can be negative when you fit the wrong model (or wrong constraints) so the $SS_e$ (sum-of-squares of residuals) is greater than $SS_t$ (sum of squares of the difference between actual and mean Y values).
The other equation is not used with nonlinear regression:
$R^2 = SS_m/SS_t$
But if this equation were used, it results in $R^2$ greater than 1.0 in cases where the model fits the data really poorly so $SS_m$ is larger than $SS_t$. This happens when the fit of the model is worse than the fit of a horizontal line, the same cases that lead to $R^2$<0 with the other equation.
Bottom line: $R^2$ can be greater than 1.0 only when an invalid (or nonstandard) equation is used to compute $R^2$ and when the chosen model (with constraints, if any) fits the data really poorly, worse than the fit of a horizontal line. | Can $R^2$ be greater than 1? | I found the answer, so will post the answer to my question. As Martijn pointed out, with linear regression you can compute $R^2$ by two equivalent expressions:
$R^2 = 1- SS_e/SS_t = SS_m/SS_t$
With no | Can $R^2$ be greater than 1?
I found the answer, so will post the answer to my question. As Martijn pointed out, with linear regression you can compute $R^2$ by two equivalent expressions:
$R^2 = 1- SS_e/SS_t = SS_m/SS_t$
With nonlinear regression, you cannot sum the sum-of-squares of residuals and sum-of-squares of the regression to obtain the total sum-of-squares. That equation is simply not true, so the equation above is not right. Those two expressions compute two different values for $R^2$.
The only equation that makes sense and is (I think) universally used is:
$R^2 = 1- SS_e/SS_t$
Its value is never greater than 1.0, but it can be negative when you fit the wrong model (or wrong constraints) so the $SS_e$ (sum-of-squares of residuals) is greater than $SS_t$ (sum of squares of the difference between actual and mean Y values).
The other equation is not used with nonlinear regression:
$R^2 = SS_m/SS_t$
But if this equation were used, it results in $R^2$ greater than 1.0 in cases where the model fits the data really poorly so $SS_m$ is larger than $SS_t$. This happens when the fit of the model is worse than the fit of a horizontal line, the same cases that lead to $R^2$<0 with the other equation.
Bottom line: $R^2$ can be greater than 1.0 only when an invalid (or nonstandard) equation is used to compute $R^2$ and when the chosen model (with constraints, if any) fits the data really poorly, worse than the fit of a horizontal line. | Can $R^2$ be greater than 1?
I found the answer, so will post the answer to my question. As Martijn pointed out, with linear regression you can compute $R^2$ by two equivalent expressions:
$R^2 = 1- SS_e/SS_t = SS_m/SS_t$
With no |
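A small numerical illustration of the bottom line above: for a (constrained) model that fits worse than a horizontal line, $1 - SS_e/SS_t$ is negative while the nonstandard $SS_m/SS_t$ formula exceeds 1. The numbers are made up.
set.seed(5)
y <- rnorm(30, mean = 10, sd = 1)
yhat <- rep(0, 30)                               # a terrible constrained "fit": always predict 0

SSt <- sum((y - mean(y))^2)
SSe <- sum((y - yhat)^2)
SSm <- sum((yhat - mean(y))^2)

1 - SSe / SSt                                    # strongly negative: worse than a horizontal line
SSm / SSt                                        # much greater than 1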
30,846 | Can $R^2$ be greater than 1? | By definition, $R^2 = 1 - SS_e/SS_t$ where both SS-terms are a sum of squares and thus nonnegative. The maximum is attained at $SS_e=0$ resulting in $R^2=1$. | Can $R^2$ be greater than 1? | By definition, $R^2 = 1 - SS_e/SS_t$ where both SS-terms are a sum of squares and thus nonnegative. The maximum is attained at $SS_e=0$ resulting in $R^2=1$. | Can $R^2$ be greater than 1?
By definition, $R^2 = 1 - SS_e/SS_t$ where both SS-terms are a sum of squares and thus nonnegative. The maximum is attained at $SS_e=0$ resulting in $R^2=1$. | Can $R^2$ be greater than 1?
By definition, $R^2 = 1 - SS_e/SS_t$ where both SS-terms are a sum of squares and thus nonnegative. The maximum is attained at $SS_e=0$ resulting in $R^2=1$. |
30,847 | Estimation of AR($p$) model by `lm` versus `arima` in R: different results | There are a few reasons. For one, your ARMA model doesn't include a mean/intercept. For another, arima by default uses the sum of squares only to find starting points for an iterative maximum likelihood scheme. Least squares regression (which throws away early data points) is usually called conditional sum of squares (CSS) in time series.
These should match up
summary(lm(x.curr ~ ., data=x.df))
arima(x = x.ts, order = c(2,0,0), include.mean = T, method="CSS") # note the mean and method arguments
Well you'll notice that there's a difference between lm's intercept and the arima's mean. The relationship is that the intercept equals the mean times $(1 - \phi_1 - \phi_2)$. You can verify that this works.
Also, and this makes everything much more confusing, the arima function will call its mean the intercept. This is a well-known issue covered in other questions such as this one, and is also explained here.
One more thing: your description for an AR(p) model is only true if you're looking at mean zero AR models. In general you can write it as
$$
(1 - \phi_1B - \cdots - \phi_p B^p)(X_t - \mu) = \epsilon_t
$$
where $\mu$ is the mean, or
$$
(1 - \phi_1B - \cdots - \phi_p B^p)X_t = c + \epsilon_t
$$
where $c$ is the intercept. This will help you with the intercept/mean dilemma above.
Finally, regarding your last question:
how could I model ARMA process with differencing (non-stationary)
using regression? Should I fit regression model to
differenced-and-lagged time series just like that?
You can either difference your nonstationary series, perhaps with diff in R, or by changing the order argument in your call to arima. For example, fitting an AR(3) to differenced data, is the same as an ARIMA(3,1,0), and so would require the parameter c(3,1,0). | Estimation of AR($p$) model by `lm` versus `arima` in R: different results | There are a few reasons. For one, your ARMA model doesn't include a mean/intercept. For another, the ARMA by default uses sum of squares only to find starting points for an iterative maximum likelihoo | Estimation of AR($p$) model by `lm` versus `arima` in R: different results
There are a few reasons. For one, your ARMA model doesn't include a mean/intercept. For another, the ARMA by default uses sum of squares only to find starting points for an iterative maximum likelihood scheme. Least squares regression (which throws away early data points), is usually called conditional sum of squares (CSS) in time series.
These should match up
summary(lm(x.curr ~ ., data=x.df))
arima(x = x.ts, order = c(2,0,0), include.mean = T, method="CSS") # note the mean and method arguments
Well you'll notice that there's a difference between lm's intercept and the arima's mean. The relationship is that the intercept equals the mean times $(1 - \phi_1 - \phi_2)$. You can verify that this works.
Also, and this makes everything much more confusing, the arima function will call its mean the intercept. This is a well-known issue covered in other questions such as this one, and is also explained here.
One more thing: your description for an AR(p) model is only true if you're looking at mean zero AR models. In general you can write it as
$$
(1 - \phi_1B - \cdots - \phi_p B^p)(X_t - \mu) = \epsilon_t
$$
where $\mu$ is the mean, or
$$
(1 - \phi_1B - \cdots - \phi_p B^p)X_t = c + \epsilon_t
$$
where $c$ is the intercept. This will help you with the intercept/mean dilemma above.
Finally, regarding your last question:
how could I model ARMA process with differencing (non-stationary)
using regression? Should I fit regression model to
differenced-and-lagged time series just like that?
You can either difference your nonstationary series, perhaps with diff in R, or by changing the order argument in your call to arima. For example, fitting an AR(3) to differenced data, is the same as an ARIMA(3,1,0), and so would require the parameter c(3,1,0). | Estimation of AR($p$) model by `lm` versus `arima` in R: different results
There are a few reasons. For one, your ARMA model doesn't include a mean/intercept. For another, the ARMA by default uses sum of squares only to find starting points for an iterative maximum likelihoo |
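A quick numerical check of the intercept/mean relationship described above, on simulated AR(2) data (the parameter values are arbitrary): the lm() intercept should be close to the arima() "intercept" (really the mean) times $(1 - \phi_1 - \phi_2)$.
set.seed(6)
x <- arima.sim(list(ar = c(0.5, 0.2)), n = 2000) + 10   # AR(2) with mean 10

fit <- arima(x, order = c(2, 0, 0), method = "CSS")
mu  <- coef(fit)["intercept"]                    # actually the mean
phi <- coef(fit)[c("ar1", "ar2")]
mu * (1 - sum(phi))                              # implied regression intercept

n <- length(x)
coef(lm(x[3:n] ~ x[2:(n - 1)] + x[1:(n - 2)]))   # lm intercept should be close to the value above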
30,848 | How does batch size affect Adam Optimizer? | Yes, batch size affects Adam optimizer. Common batch sizes 16, 32, and 64 can be used. Results show that there is a sweet spot for batch size, where a model performs best. For example, on MNIST data, three different batch sizes gave different accuracy as shown in the table below:
| Batch Size | Test Accuracy      |
|------------|--------------------|
| 1024       | 96% with 30 epochs |
| 64         | 98% with 30 epochs |
| 2          | 99% with 30 epochs |
Therefore, in this example, decreasing the batch size increases test accuracy. However, do not generalize this finding, as the effect depends on the complexity of the data at hand.
Here is a detailed blog (Effect of batch size on training dynamics) that discusses impact of batch size.
In addition, the following research papers give a detailed overview and analysis of how batch size impacts model accuracy (generalization).
Smith, Samuel L., et al. "Don't decay the learning rate, increase the batch size." arXiv preprint arXiv:1711.00489 (2017).
Hoffer, Elad, Itay Hubara, and Daniel Soudry. "Train longer, generalize better: closing the generalization gap in large batch training of neural networks." Advances in Neural Information Processing Systems. 2017. | How does batch size affect Adam Optimizer? | Yes, batch size affects Adam optimizer. Common batch sizes 16, 32, and 64 can be used. Results show that there is a sweet spot for batch size, where a model performs best. For example, on MNIST data, | How does batch size affect Adam Optimizer?
Yes, batch size affects Adam optimizer. Common batch sizes 16, 32, and 64 can be used. Results show that there is a sweet spot for batch size, where a model performs best. For example, on MNIST data, three different batch sizes gave different accuracy as shown in the table below:
| Batch Size | Test Accuracy      |
|------------|--------------------|
| 1024       | 96% with 30 epochs |
| 64         | 98% with 30 epochs |
| 2          | 99% with 30 epochs |
Therefore, in this example, decreasing the batch size increases test accuracy. However, do not generalize this finding, as the effect depends on the complexity of the data at hand.
Here is a detailed blog (Effect of batch size on training dynamics) that discusses impact of batch size.
In addition, the following research papers give a detailed overview and analysis of how batch size impacts model accuracy (generalization).
Smith, Samuel L., et al. "Don't decay the learning rate, increase the batch size." arXiv preprint arXiv:1711.00489 (2017).
Hoffer, Elad, Itay Hubara, and Daniel Soudry. "Train longer, generalize better: closing the generalization gap in large batch training of neural networks." Advances in Neural Information Processing Systems. 2017. | How does batch size affect Adam Optimizer?
Yes, batch size affects Adam optimizer. Common batch sizes 16, 32, and 64 can be used. Results show that there is a sweet spot for batch size, where a model performs best. For example, on MNIST data, |
30,849 | How does batch size affect Adam Optimizer? | I would just leave this as a comment, but I don't have enough reputation.
There's an excellent discussion of the trade offs of large and small batch sizes here. | How does batch size affect Adam Optimizer? | I would just leave this as a comment, but I don't have enough reputation.
There's an excellent discussion of the trade offs of large and small batch sizes here. | How does batch size affect Adam Optimizer?
I would just leave this as a comment, but I don't have enough reputation.
There's an excellent discussion of the trade offs of large and small batch sizes here. | How does batch size affect Adam Optimizer?
I would just leave this as a comment, but I don't have enough reputation.
There's an excellent discussion of the trade offs of large and small batch sizes here. |
30,850 | Compound Poisson Distribution with sum of exponential random variables | The Question
Let $X \sim \text{Exponential}(\alpha)$, and let $\{X_1, X_2,\dots, X_n\}$ denote an iid sample of size $n$, where the sample size $n$ (instead of being fixed) is itself a Poisson random variable $N=n$. The OP seeks the distribution of the sample sum:
$$S = X_1 + X_2 + \dots + X_n \quad \quad \text{where} \quad N \sim \text{Poisson}(\lambda)$$
As $N$ is Poisson, and the domain of support of a Poisson includes 0, it follows that the sample size $N$ can be 0, in which case $S = 0$. This is important, because it means that $P(S = 0)$ will have discrete mass.
Solution
To proceed, first note that the sum of $n$ independent identical $\text{Exponential}(\alpha)$ variables has a $\text{Gamma}(n,\alpha)$ distribution i.e. $S$ has pdf $f(s \; \big| \; N = n)$:
where parameter $N \sim \text{Poisson}(\lambda)$ with pmf $g(n)$:
We seek the parameter mixture distribution of $S$ and $N$.
Unconditional pdf of $S$
Discrete Part: $S = 0$ iff $n = 0$. This occurs with probability $P(N=0)$:
Continuous Part:
The parameter-mix distribution, for $S>0$, is given by $\mathbb{E}_g[f]$:
where:
I am using the Expect function from the mathStatica package for Mathematica
Hypergeometric0F1Regularized denotes the confluent hypergeometric function
In summary, the unconditional pdf of $S$ is:
$$\text{pdf}(S) = \left\{
\begin{array}{cc}
e^{-\lambda} & \text{ if } s = 0 \\
\text{sol} & \text{ if } s > 0 \\
\end{array}\right.$$
which is a mixed discrete-continuous distribution. | Compound Poisson Distribution with sum of exponential random variables | The Question
Let $X \sim \text{Exponential}(\alpha)$, and let $\{X_1, X_2,\dots, X_n\}$ denote an iid sample of size $n$, where the sample size $n$ (instead of being fixed) is itself a Poisson random | Compound Poisson Distribution with sum of exponential random variables
The Question
Let $X \sim \text{Exponential}(\alpha)$, and let $\{X_1, X_2,\dots, X_n\}$ denote an iid sample of size $n$, where the sample size $n$ (instead of being fixed) is itself a Poisson random variable $N=n$. The OP seeks the distribution of the sample sum:
$$S = X_1 + X_2 + \dots + X_n \quad \quad \text{where} \quad N \sim \text{Poisson}(\lambda)$$
As $N$ is Poisson, and the domain of support of a Poisson includes 0, it follows that the sample size $N$ can be 0, in which case $S = 0$. This is important, because it means that $P(S = 0)$ will have discrete mass.
Solution
To proceed, first note that the sum of $n$ independent identical $\text{Exponential}(\alpha)$ variables has a $\text{Gamma}(n,\alpha)$ distribution i.e. $S$ has pdf $f(s \; \big| \; N = n)$:
where parameter $N \sim \text{Poisson}(\lambda)$ with pmf $g(n)$:
We seek the parameter mixture distribution of $S$ and $N$.
Unconditional pdf of $S$
Discrete Part: $S = 0$ iff $n = 0$. This occurs with probability $P(N=0)$:
Continuous Part:
The parameter-mix distribution, for $S>0$, is given by $\mathbb{E}_g[f]$:
where:
I am using the Expect function from the mathStatica package for Mathematica
Hypergeometric0F1Regularized denotes the confluent hypergeometric function
In summary, the unconditional pdf of $S$ is:
$$\text{pdf}(S) = \left\{
\begin{array}{cc}
e^{-\lambda} & \text{ if } s = 0 \\
\text{sol} & \text{ if } s > 0 \\
\end{array}\right.$$
which is a mixed discrete-continuous distribution. | Compound Poisson Distribution with sum of exponential random variables
The Question
Let $X \sim \text{Exponential}(\alpha)$, and let $\{X_1, X_2,\dots, X_n\}$ denote an iid sample of size $n$, where the sample size $n$ (instead of being fixed) is itself a Poisson random |
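A simulation sketch of this mixed discrete-continuous structure, written with the rate parameterization (Exponential mean $1/\alpha$, which may differ from the parameterization above); it checks the atom at zero and a CDF value against the Poisson-weighted Gamma mixture. Parameter values are arbitrary.
set.seed(7)
lambda <- 4; alpha <- 2
N <- rpois(1e5, lambda)
S <- sapply(N, function(n) sum(rexp(n, rate = alpha)))   # sum(rexp(0, ...)) is 0

mean(S == 0); exp(-lambda)                       # discrete mass at zero

b <- 3.5
mean(S <= b)                                     # empirical CDF at b
exp(-lambda) + sum(dpois(1:100, lambda) * pgamma(b, shape = 1:100, rate = alpha))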
30,851 | Compound Poisson Distribution with sum of exponential random variables | The cumulative distribution does not have a simple closed form
expression, nor does the density. Note that there is an atom at $S =
0$ with mass $\mathrm{Pr}\{N = 0\} = e^{-\lambda}$, so the density is
for $S \vert S > 0$.
The series in the density can be related to the Bessel functions
$I_0(x)$ and $I_1(x)$. But since this is a special case of the compound
Poisson-Gamma distribution which itself is a special case of the
Tweedie distribution, usable computing tools can be found
under this name.
EDIT To derive an expression of the density, consider the following series
$$
R(y) := \sum_{n=0}^{\infty} \frac{1}{n!\,n!} \, y^{n}, \qquad
r(y) := \sum_{n=1}^{\infty} \frac{1}{n!\,(n-1)!} \, y^{n-1} = R'(y).
$$
Note that $R(y) = I_0(2 \sqrt{y})$ where $I_0(z)$ is the usual
modified Bessel function,
with derivative $I_0'(z) = I_1(z)$. So, using the expression given in the
question and some simple algebra we get the density: $f(x) = p
\,\delta(x) + (1 - p) \, f_1(x)$ where $\delta(x)$ (abusively) stands
for a Dirac density, $p:= e^{-\lambda}$ and
$$
f_1(x) = \frac{p}{1 -p} \, e^{ - \alpha x } \alpha \lambda \,
\frac{I_1(2 \sqrt{\alpha \lambda x})}{\sqrt{\alpha \lambda x}}
\qquad \text{for } x > 0,
$$
which is the density of $S$ conditional on $S > 0$. This must be the
same solution as that of the answer by @wolfies, up to the $\alpha
\leftrightarrow 1/ \alpha$ change of notation therein. The Bessel
functions $I_\nu(z)$ are widely available, e.g. in R using besselI. | Compound Poisson Distribution with sum of exponential random variables | The cumulative distribution does not have a simple closed form
expression, nor does the density. Note that there is an atom at $S =
0$ with mass $\mathrm{Pr}\{N = 0\} = e^{-\lambda}$, so the density | Compound Poisson Distribution with sum of exponential random variables
The cumulative distribution does not have a simple closed form
expression, nor does the density. Note that there is an atom at $S =
0$ with mass $\mathrm{Pr}\{N = 0\} = e^{-\lambda}$, so the density is
for $S \vert S > 0$.
The series in the density can be related to the Bessel functions
$I_0(x)$ and $I_1(x)$. But since this is a special case of the compound
Poisson-Gamma distribution which itself is a special case of the
Tweedie distribution, usable computing tools can be found
under this name.
EDIT To derive an expression of the density, consider the following series
$$
R(y) := \sum_{n=0}^{\infty} \frac{1}{n!\,n!} \, y^{n}, \qquad
r(y) := \sum_{n=1}^{\infty} \frac{1}{n!\,(n-1)!} \, y^{n-1} = R'(y).
$$
Note that $R(y) = I_0(2 \sqrt{y})$ where $I_0(z)$ is the usual
modified Bessel function,
with derivative $I_0'(z) = I_1(z)$. So, using the expression given in the
question and some simple algebra we get the density: $f(x) = p
\,\delta(x) + (1 - p) \, f_1(x)$ where $\delta(x)$ (abusively) stands
for a Dirac density, $p:= e^{-\lambda}$ and
$$
f_1(x) = \frac{p}{1 -p} \, e^{ - \alpha x } \alpha \lambda \,
\frac{I_1(2 \sqrt{\alpha \lambda x})}{\sqrt{\alpha \lambda x}}
\qquad \text{for } x > 0,
$$
which is the density of $S$ conditional on $S > 0$. This must be the
same solution as that of the answer by @wolfies, up to the $\alpha
\leftrightarrow 1/ \alpha$ change of notation therein. The Bessel
functions $I_\nu(z)$ are widely available, e.g. in R using besselI. | Compound Poisson Distribution with sum of exponential random variables
The cumulative distribution does not have a simple closed form
expression, nor does the density. Note that there is an atom at $S =
0$ with mass $\mathrm{Pr}\{N = 0\} = e^{-\lambda}$, so the density |
30,852 | Compound Poisson Distribution with sum of exponential random variables | Let $\{Y_i\}_{i\geq 1}$ be a sequence of IID $\mathrm{Exp}(\alpha)$ random variables, and let
$$
S_N=\sum_{i=1}^N Y_i,
$$
in which $N\sim\mathrm{Poisson}(\lambda)$. We know that $S_N\mid N=n\sim\mathrm{Gamma}(n,\alpha)$, for $n
\geq 1$. Hence,
\begin{align*}
\Pr\{S_N\in B\} &= \mathrm{E}\left[\Pr\{S_N\in B\mid N\}\right] \\
&=e^{-\lambda}I_B(0)+\sum_{n=1}^\infty \left(\frac{e^{-\lambda}\lambda^n}{n!} \int_B \frac{\alpha^n}{(n-1)!}\,u^{n-1}e^{-\alpha u}\,I_{(0,\infty)}(u)\,du\right).
\end{align*}
# Numerical evaluation of Pr{S_N in B} by truncating the series of
# Poisson-weighted Gamma probabilities once the terms fall below TOL.
TOL <- 0.01
lambda <- 4
alpha <- 2
B <- c(0, 3.5)
prob <- 0
if (B[1] == 0) prob <- exp(-lambda)   # atom at zero: Pr{N = 0}
n <- 1
repeat {
  next_term <- exp(-lambda+n*log(lambda)-lfactorial(n)) *
    (pgamma(B[2], shape = n, rate = alpha) -
     pgamma(B[1], shape = n, rate = alpha))
  if (next_term < TOL) break          # stop once the terms become negligible
  prob <- prob + next_term
  n <- n + 1
}
print(prob)
$$
S_N=\sum_{i=1}^N Y_i,
$$
in which $N\sim\mathrm{Poisson}(\lambda)$. We know that $S_N\mid N=n\sim\ma | Compound Poisson Distribution with sum of exponential random variables
Let $\{Y_i\}_{i\geq 1}$ be a sequence of IID $\mathrm{Exp}(\alpha)$ random variables, and let
$$
S_N=\sum_{i=1}^N Y_i,
$$
in which $N\sim\mathrm{Poisson}(\lambda)$. We know that $S_N\mid N=n\sim\mathrm{Gamma}(n,\alpha)$, for $n
\geq 1$. Hence,
\begin{align*}
\Pr\{S_N\in B\} &= \mathrm{E}\left[\Pr\{S_N\in B\mid N\}\right] \\
&=e^{-\lambda}I_B(0)+\sum_{n=1}^\infty \left(\frac{e^{-\lambda}\lambda^n}{n!} \int_B \frac{\alpha^n}{(n-1)!}\,u^{n-1}e^{-\alpha u}\,I_{(0,\infty)}(u)\,du\right).
\end{align*}
TOL <- 0.01
lambda <- 4
alpha <- 2
B <- c(0, 3.5)
prob <- 0
if (B[1] == 0) prob <- exp(-lambda)
n <- 1
repeat {
next_term <- exp(-lambda+n*log(lambda)-lfactorial(n)) *
(pgamma(B[2], shape = n, rate = alpha) -
pgamma(B[1], shape = n, rate = alpha))
if (next_term < TOL) break
prob <- prob + next_term
n <- n + 1
}
print(prob) | Compound Poisson Distribution with sum of exponential random variables
Let $\{Y_i\}_{i\geq 1}$ be a sequence of IID $\mathrm{Exp}(\alpha)$ random variables, and let
$$
S_N=\sum_{i=1}^N Y_i,
$$
in which $N\sim\mathrm{Poisson}(\lambda)$. We know that $S_N\mid N=n\sim\ma |
30,853 | Compound Poisson Distribution with sum of exponential random variables | This comes a bit late, but I've just spent some time on this question, so here's a summary of the way the reasoning goes, based on the previously proposed answers:
the sum of $n$ i.i.d. exponentially distributed variables is distributed as
$f_0 = \delta(x)$, $f_n = \frac{\alpha^n x^{n-1} e^{-\alpha x}}{(n-1)!}$ (in which case it is a gamma distribution).
then the probability of there being $n$ contributions in the total sum is given by the Poisson distribution of $n$, $w_n = e^{-\lambda} \frac{\lambda^n}{n!}$.
So, making a mix of the original post and parts of the different previous answers, we can make up an expression for the total pdf as:
$$
f(x) = e^{-\lambda} \left[ \delta(x) + \sum_{n=1}^\infty \frac{\lambda^n}{n!} \frac{\alpha^n x^{n-1} e^{-\alpha x}}{(n-1)!}\right],
$$
or, shifting the indices around:
$$
f(x) = e^{-\lambda} \left[ \delta(x) + e^{-\alpha x} \lambda \alpha \sum_{n=0}^\infty \frac{(\lambda \alpha x)^n}{(n+1)! n!}\right].
$$
Where I contribute is by referencing Eq. 9.6.10 of Abramowitz & Stegun, which gives an expression for the ``hard series'' mentioned by the OP in terms of modified Bessel functions as
$$
I_\nu(z) = (z/2)^\nu \sum_{k=0}^\infty \frac{(z^2/4)^k}{k! \Gamma(\nu+k+1)},
$$
from which we finally get
$$
f(x) = e^{-\lambda} \left[ \delta(x) + e^{-\alpha x} \sqrt{\frac{\alpha \lambda}{x}} \mathrm{I_1} (2 \sqrt{\alpha \lambda x})\right].
$$ | Compound Poisson Distribution with sum of exponential random variables | This comes a bit late, but I've just spent some time on this question, so here's a summary of the way the reasoning goes, based on the previously proposed answers:
the sum of $n$ i.i.d. exponentially | Compound Poisson Distribution with sum of exponential random variables
This comes a bit late, but I've just spent some time on this question, so here's a summary of the way the reasoning goes, based on the previously proposed answers:
the sum of $n$ i.i.d. exponentially distributed variables is distributed as
$f_0 = \delta(x)$, $f_n = \frac{\alpha^n x^{n-1} e^{-\alpha x}}{(n-1)!}$ (in which case it is a gamma distribution).
then the probability of there being $n$ contributions in the total sum is given by the Poisson distribution of $n$, $w_n = e^{-\lambda} \frac{\lambda^n}{n!}$.
So, making a mix of the original post and parts of the different previous answers, we can make up an expression for the total pdf as:
$$
f(x) = e^{-\lambda} \left[ \delta(x) + \sum_{n=1}^\infty \frac{\lambda^n}{n!} \frac{\alpha^n x^{n-1} e^{-\alpha x}}{(n-1)!}\right],
$$
or, shifting the indices around:
$$
f(x) = e^{-\lambda} \left[ \delta(x) + e^{-\alpha x} \lambda \alpha \sum_{n=0}^\infty \frac{(\lambda \alpha x)^n}{(n+1)! n!}\right].
$$
Where I contribute is by referencing Eq. 9.6.10 of Abramowitz & Stegun, which gives an expression for the ``hard series'' mentioned by the OP in terms of modified Bessel functions as
$$
I_\nu(z) = (z/2)^\nu \sum_{k=0}^\infty \frac{(z^2/4)^k}{k! \Gamma(\nu+k+1)},
$$
from which we finally get
$$
f(x) = e^{-\lambda} \left[ \delta(x) + e^{-\alpha x} \sqrt{\frac{\alpha \lambda}{x}} \mathrm{I_1} (2 \sqrt{\alpha \lambda x})\right].
$$ | Compound Poisson Distribution with sum of exponential random variables
This comes a bit late, but I've just spent some time on this question, so here's a summary of the way the reasoning goes, based on the previously proposed answers:
the sum of $n$ i.i.d. exponentially |
30,854 | Compound Poisson Distribution with sum of exponential random variables | Here is an alternative approach by considering the process as a 2d random walk with drift:
$$X(t) = \sum_{k=1}^t X_k \\
Y(t) = \sum_{k=1}^t Y_k$$
where each $X_k$ and $Y_k$ are exponentially distributed with rate $\lambda$. Below is a simulation for $\lambda = 15$.
The question is then similar to the distribution of the position of $Y(t-1)$ for the lowest $t$ where $X(t) > 1$. Or approximately similar to the distribution of the position of $Y(t)$ for the lowest $t$ where $X(t) > 1$.
Or in simple words: what was/is the position of the distribution of $Y(t)$ when $X(t)$ passes the barrier $x=1$?
If the rate $\lambda$ is high, which means that there are many small steps, then the above process becomes approximately a 2D Wiener process with a drift $\nu = \lambda^{-1}$ and diffusion $\sigma = \lambda^{-1}$.
We could consider the
Time $t$ that the $x$ coordinate hits the barrier. This is inverse Gaussian distributed.
$$f(t) = \frac{1}{\sqrt{2\pi s^{2} t^3}} \exp \left( \frac{-(1-st)^2}{2s^2 t} \right)$$ where $s = \lambda^{-1}$
Position $y(t)$ conditional on the barrier hit time. This is Gaussian distributed.
$$g(y;t) = \frac{1}{\sqrt{2\pi s^2 t}} \exp \left( \frac{-(y-s t)^2}{2 s^2 t}\right)$$
And the distribution of $y$ when the coordinate $x$ hits the barrier is distributed as the compound distribution
$$h(y) = \int_0^\infty g(y;t)f(t) dt$$
when we work the integral out then we get
$$\begin{array}{}
h(y) &=& \int_0^\infty \frac{1}{\sqrt{2\pi s^2 t^3}} \exp \left( \frac{-(1-s t)^2}{2 s^2 t} \right)\frac{1}{\sqrt{2\pi s^2 t}} \exp \left( \frac{-(y-s t)^2}{2 s^2 t}\right) dt\\
&=&\frac{1}{2\pi s^2 } \int_0^\infty \frac{1}{t^2} \exp \left( \frac{-(1-s t)^2-(y-s t)^2}{2 s^2 t} \right) dt\\
&=& \frac{\sqrt{2}}{\pi s^2} \exp\left(\frac{y+1}{s}\right)\sqrt{\frac{s^2}{y^2+1}}K_1\left(\sqrt{\frac{2y^2+2}{s^2}}\right)
\end{array}$$
where $K_1$ is the first order modified Bessel function of the second kind.
The graph below, with an empirical density of the simulated values along with the derived function, shows that this approach gives a reasonable approximation.
Non central $\chi^2$-distribution.
The other answers, using an exact approach, ended up with an expression that uses a slightly different Bessel function. Interestingly, those expressions relate to a non-central chi-squared distribution, which has the density
$$f_Y(y;k,\lambda) = \sum_{i=1}^\infty \frac{e^{-\lambda/2} (\lambda/2)^i}{i!} f_{Z_{k+2i}}(y)$$
where $f_{Z_{k+2i}}(y)$ is the pdf of the chi-squared distribution with $k+2i$ degrees of freedom, which for even degrees of freedom is an Erlang distribution (a gamma with integer shape), i.e. a sum of iid exponential variables.
The case with $k = 0$ is equivalent to the problem here. This problem has been discussed before. Siegel, A. F. (1979), "The noncentral chi-squared distribution with zero degrees of freedom and testing for uniformity"
This relationship with the chi-squared distribution I still have to investigate further. To be continued. Intuition behind occurence of non central chi squared distribution in coordinates of a random walk | Compound Poisson Distribution with sum of exponential random variables | Here is an alternative approach by considering the process as a 2d random walk with drift:
$$X(t) = \sum_{k=1}^t X_k \\
Y(t) = \sum_{k=1}^t Y_k$$
where each $X_k$ and $Y_k$ are exponentially distribut | Compound Poisson Distribution with sum of exponential random variables
Here is an alternative approach by considering the process as a 2d random walk with drift:
$$X(t) = \sum_{k=1}^t X_k \\
Y(t) = \sum_{k=1}^t Y_k$$
where each $X_k$ and $Y_k$ are exponentially distributed with rate $\lambda$. Below is a simulation for $\lambda = 15$.
The question is then similar to the distribution of the position of $Y(t-1)$ for the lowest $t$ where $X(t) > 1$. Or approximately similar to the distribution of the position of $Y(t)$ for the lowest $t$ where $X(t) > 1$.
Or in simple words: what was/is the position of the distribution of $Y(t)$ when $X(t)$ passes the barrier $x=1$?
If the rate $\lambda$ is high, which means that there are many small steps, then the above process becomes approximately a 2D Wiener process with a drift $\nu = \lambda^{-1}$ and diffusion $\sigma = \lambda^{-1}$.
We could consider two quantities. First, the time $t$ at which the $x$ coordinate hits the barrier; this is inverse Gaussian distributed:
$$f(t) = \frac{1}{\sqrt{2\pi s^{2} t^3}} \exp \left( \frac{-(1-st)^2}{2s^2 t} \right)$$ where $s = \lambda^{-1}$
Second, the position $y(t)$ conditional on the barrier hit time; this is Gaussian distributed:
$$g(y;t) = \frac{1}{\sqrt{2\pi s^2 t}} \exp \left( \frac{-(y-s t)^2}{2 s^2 t}\right)$$
And the distribution of $y$ when the coordinate $x$ hits the barrier is distributed as the compound distribution
$$h(y) = \int_0^\infty g(y;t)f(t) dt$$
when we work the integral out then we get
$$\begin{array}{}
h(y) &=& \int_0^\infty \frac{1}{\sqrt{2\pi s^2 t^3}} \exp \left( \frac{-(1-s t)^2}{2 s^2 t} \right)\frac{1}{\sqrt{2\pi s^2 t}} \exp \left( \frac{-(y-s t)^2}{2 s^2 t}\right) dt\\
&=&\frac{1}{2\pi s^2 } \int_0^\infty \frac{1}{t^2} \exp \left( \frac{-(1-s t)^2-(y-s t)^2}{2 s^2 t} \right) dt\\
&=& \frac{\sqrt{2}}{\pi s^2} \exp\left(\frac{y+1}{s}\right)\sqrt{\frac{s^2}{y^2+1}}K_1\left(\sqrt{\frac{2y^2+2}{s^2}}\right)
\end{array}$$
where $K_1$ is the first order modified Bessel function of the second kind.
The graph below, with an empirical density of the simulated values along with the derived function, shows that this approach gives a reasonable approximation.
Non central $\chi^2$-distribution.
The other answers, using an exact approach, ended up with an expression that uses a slightly different Bessel function. Interestingly, those expressions relate to a non-central chi-squared distribution, which has the density
$$f_Y(y;k,\lambda) = \sum_{i=0}^\infty \frac{e^{-\lambda/2} (\lambda/2)^i}{i!} f_{Z_{k+2i}}(y)$$
where $f_{Z_{k+2i}}(y)$ is the pdf of the chi-squared distribution with $k+2i$ degrees of freedom, which for an even number of degrees of freedom $2m$ is an Erlang (gamma) distribution with shape $m$ and rate $1/2$.
The case with $k = 0$ is equivalent to the problem here. This problem has been discussed before. Siegel, A. F. (1979), "The noncentral chi-squared distribution with zero degrees of freedom and testing for uniformity"
I still have to investigate this relationship with the chi-squared distribution further; to be continued. See also: Intuition behind occurrence of non central chi squared distribution in coordinates of a random walk | Compound Poisson Distribution with sum of exponential random variables
Here is an alternative approach by considering the process as a 2d random walk with drift:
$$X(t) = \sum_{k=1}^t X_k \\
Y(t) = \sum_{k=1}^t Y_k$$
where each $X_k$ and $Y_k$ are exponentially distribut |
30,855 | How to decide whether to set REML to True or False? | In my (not entirely uninformed) opinion you're getting some questionable advice, from the web page and from the comments you received.
you can use REML (or ML) whenever you want (regardless of the random effects structure - single vs. multiple, balanced vs. unbalanced, crossed vs. nested)
in simple cases (balanced/nested/etc.) REML can be proven to provide unbiased estimates of variance components (but not unbiased estimates of e.g. standard deviation or log standard deviation)
you cannot compare models that differ in fixed effects if they are fitted by REML rather than ML; this is why the commenter recommends that you use REML=FALSE if you're trying to do model selection
however, I wouldn't recommend you do model selection in the first place, certainly not if you're going to rely on the conditional confidence intervals and p-values (i.e., analyzing the refitted 'minimal adequate' model without accounting for the effects of model selection)
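A short lme4 sketch of these points (my own illustration on the package's built-in sleepstudy data; the models are purely illustrative, not a recommendation):
library(lme4)
# compare fixed effects using ML fits
m_full <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, REML = FALSE)
m_red  <- lmer(Reaction ~ 1 + (Days | Subject), sleepstudy, REML = FALSE)
anova(m_red, m_full)                 # likelihood-ratio test for the fixed effect of Days
# report variance components from a REML fit of the chosen model
m_reml <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy, REML = TRUE)
VarCorr(m_reml)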
From my chapter in Fox et al 2015:
It’s generally good to use REML, if it is available, when you are interested in the magnitude of the random effects variances, but never when you are comparing models with different fixed effects via hypothesis tests or information-theoretic criteria such as AIC. | How to decide whether to set REML to True or False? | In my (not entirely uninformed) opinion you're getting some questionable advice, from the web page and from the comments you received.
you can use REML (or ML) whenever you want (regardless of the ra | How to decide whether to set REML to True or False?
In my (not entirely uninformed) opinion you're getting some questionable advice, from the web page and from the comments you received.
you can use REML (or ML) whenever you want (regardless of the random effects structure - single vs. multiple, balanced vs. unbalanced, crossed vs. nested)
in simple cases (balanced/nested/etc.) REML can be proven to provide unbiased estimates of variance components (but not unbiased estimates of e.g. standard deviation or log standard deviation)
you cannot compare models that differ in fixed effects if they are fitted by REML rather than ML; this is why the commenter recommends that you use REML=FALSE if you're trying to do model selection
however, I wouldn't recommend you do model selection in the first place, certainly not if you're going to rely on the conditional confidence intervals and p-values (i.e., analyzing the refitted 'minimal adequate' model without accounting for the effects of model selection)
From my chapter in Fox et al 2015:
It’s generally good to use REML, if it is available, when you are interested in the magnitude of the random effects variances, but never when you are comparing models with different fixed effects via hypothesis tests or information-theoretic criteria such as AIC. | How to decide whether to set REML to True or False?
In my (not entirely uninformed) opinion you're getting some questionable advice, from the web page and from the comments you received.
you can use REML (or ML) whenever you want (regardless of the ra |
30,856 | What is the interpretation of eps parameter in DBSCAN clustering? | Epsilon is the local radius for expanding clusters. Think of it as a step size - DBSCAN never takes a step larger than this, but by doing multiple steps DBSCAN clusters can become much larger than eps.
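A small hedged illustration of this point (mine, not the answerer's), using the dbscan package: a chain of closely spaced points is linked into a single cluster whose extent is far larger than eps.
library(dbscan)
set.seed(1)
chain <- cbind(seq(0, 20, by = 0.5), rnorm(41, sd = 0.1))  # 41 points strung out over 20 units
res <- dbscan(chain, eps = 1, minPts = 3)
table(res$cluster)        # all 41 points end up in one cluster
diff(range(chain[, 1]))   # that cluster spans about 20 units, much more than eps = 1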
If you want your "clusters" to have a maximum radius, that is a set cover type of problem, so you will probably want a greedy approximation. It's not a clustering problem, because you do not allow the clustering algorithm to discover structure larger than that. You want to approximate your data with a cover, ignoring structure.
But there are some clustering algorithms where you can bound the cluster radius (but they probably won't try hard enough to optimize for your problem):
LEADER is kind of like DBSCAN minus the cluster expansion. Choose an unclustered point and add everything within a radius of x. Repeat until all points are "clustered". It does not optimize anything, and you do not get a whole lot of theoretical properties. But the maximum distance in a cluster is 2x. Run it twice and you would get very different results.
Complete-link HAC after cutting the dendrogram at height x, that is the maximum distance of two points. The results should be much better than Leader's, and more stable. Nevertheless, complete-link HAC may not find the optimum.
CLINK is a faster variant of complete link (just O(n²) rather than n³) but tends to find much worse solutions. You may want to run this several times on permutations of your data. | What is the interpretation of eps parameter in DBSCAN clustering? | Epsilon is the local radius for expanding clusters. Think of it as a step size - DBSCAN never takes a step larger than this, but by doing multiple steps DBSCAN clusters can become much larger than eps | What is the interpretation of eps parameter in DBSCAN clustering?
Epsilon is the local radius for expanding clusters. Think of it as a step size - DBSCAN never takes a step larger than this, but by doing multiple steps DBSCAN clusters can become much larger than eps.
If you want your "clusters" to have a maximum radius, that is a set cover type of problem, so you will probably want a greedy approximation. It's not a clustering problem, because you do not allow the clustering algorithm to discover structure larger than that. You want to approximate your data with a cover, ignoring structure.
But there are some clustering algorithms where you can bound the cluster radius (but they probably won't try hard enough to optimize for your problem):
LEADER is kind of like DBSCAN minus the cluster expansion. Choose an unclustered point and add everything within a radius of x. Repeat until all points are "clustered". It does not optimize anything, and you do not get a whole lot of theoretical properties. But the maximum distance in a cluster is 2x. Run it twice and you would get very different results.
Complete-link HAC after cutting the dendrogram at height x, that is the maximum distance of two points. The results should be much better than Leader's, and more stable. Nevertheless, complete-link HAC may not find the optimum.
CLINK is a faster variant of complete link (just O(n²) rather than n³) but tends to find much worse solutions. You may want to run this several times on permutations of your data. | What is the interpretation of eps parameter in DBSCAN clustering?
Epsilon is the local radius for expanding clusters. Think of it as a step size - DBSCAN never takes a step larger than this, but by doing multiple steps DBSCAN clusters can become much larger than eps |
30,857 | What is the interpretation of eps parameter in DBSCAN clustering? | The meaning of $\epsilon$ is that of the neighbourhood size. The neighbourhood of a point $p$, denoted by $N_{\epsilon}(p)$, is defined as the $N_{\epsilon}(p) = \{q \in D | dist(p,q) \leq \epsilon \}$. Here $D$ is a database of $n$ objects (points) and $q$ a query point.
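As a quick sketch of this definition (my own code and names, not from the answer), the eps-neighbourhood is simply every database point within distance eps of the query point:
eps_neighbourhood <- function(D, q, eps) {
  d <- sqrt(rowSums(sweep(D, 2, q)^2))   # Euclidean distance from q to every row of D
  D[d <= eps, , drop = FALSE]
}
D <- matrix(rnorm(200), ncol = 2)        # a toy database of 100 points in the plane
nrow(eps_neighbourhood(D, q = c(0, 0), eps = 0.5))   # how many neighbours the origin has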
$\epsilon$ is what would constitute a reasonable radius for your particular problem. For example, when looking to cluster cities, tens of kilometres is probably reasonable. See also this post. Yes, I guess $\epsilon = 1000$ seems like a reasonable first estimate. I would probably try something bigger first but this does not seem horribly misplaced. Let me point out that choosing your distance metric is probably more important than your $\epsilon$ in a way. You can also re-run your analysis with a different $\epsilon$ and see the influence of it but your insights will be tied directly to the distance metric used. | What is the interpretation of eps parameter in DBSCAN clustering? | The meaning of $\epsilon$ is that of the neighbourhood size. The neighbourhood of a point $p$, denoted by $N_{\epsilon}(p)$, is defined as the $N_{\epsilon}(p) = \{q \in D | dist(p,q) \leq \epsilon \}
The meaning of $\epsilon$ is that of the neighbourhood size. The neighbourhood of a point $p$, denoted by $N_{\epsilon}(p)$, is defined as the $N_{\epsilon}(p) = \{q \in D | dist(p,q) \leq \epsilon \}$. Here $D$ is a database of $n$ objects (points) and $q$ a query point.
$\epsilon$ is what would constitute a reasonable radius for your particular problem. For example, when looking to cluster cities, tens of kilometres is probably reasonable. See also this post. Yes, I guess $\epsilon = 1000$ seems like a reasonable first estimate. I would probably try something bigger first but this does not seem horribly misplaced. Let me point out that choosing your distance metric is probably more important than your $\epsilon$ in a way. You can also re-run your analysis with a different $\epsilon$ and see the influence of it but your insights will be tied directly to the distance metric used. | What is the interpretation of eps parameter in DBSCAN clustering?
The meaning of $\epsilon$ is that of the neighbourhood size. The neighbourhood of a point $p$, denoted by $N_{\epsilon}(p)$, is defined as the $N_{\epsilon}(p) = \{q \in D | dist(p,q) \leq \epsilon \} |
30,858 | Poisson xgboost with exposure | According to the answer in:
https://stackoverflow.com/questions/34896004/xgboost-offset-exposure
xgboost can handle an offset term, as in glm or gbm, using setinfo, but this method is not documented very well.
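For context, an expanded sketch of that approach (my own assembly; it assumes the MASS Insurance data and the temp2 predictor matrix defined in the question):
library(xgboost)
library(MASS)   # Insurance data; temp2 is built from it in the question
xgbMatrix <- xgb.DMatrix(as.matrix(temp2), label = Insurance$Claims)
setinfo(xgbMatrix, "base_margin", log(Insurance$Holders))   # log-exposure offset
bst <- xgboost(data = xgbMatrix, objective = "count:poisson", nrounds = 100, verbose = 0)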
In your example, the code would be:
setinfo(xgbMatrix,"base_margin",log(Insurance$Holders)) | Poisson xgboost with exposure | According to the answer in:
https://stackoverflow.com/questions/34896004/xgboost-offset-exposure
xgboost can handle offset term as in glm or gbm using setinfo, but this method is not documented very w | Poisson xgboost with exposure
According to the answer in:
https://stackoverflow.com/questions/34896004/xgboost-offset-exposure
xgboost can handle an offset term, as in glm or gbm, using setinfo, but this method is not documented very well.
In your example, the code would be:
setinfo(xgbMatrix,"base_margin",log(Insurance$Holders)) | Poisson xgboost with exposure
According to the answer in:
https://stackoverflow.com/questions/34896004/xgboost-offset-exposure
xgboost can handle offset term as in glm or gbm using setinfo, but this method is not documented very w |
30,859 | Poisson xgboost with exposure | Your code works just fine; you just need to increase the parameter nrounds to get the desired result. Boosting models don't converge in the first few iterations.
library(xgboost)
library(MASS)  # Insurance data; temp2 and Insurance$freq come from the question's setup

xgbMatrix <- xgb.DMatrix(as.matrix(temp2),
                         label = Insurance$freq,      # claims rate as the response
                         weight = Insurance$Holders)  # exposure used as case weight
bst <- xgboost(data = xgbMatrix, objective = "count:poisson", nrounds = 500, verbose = 0)
Insurance$predFreq <- predict(bst, xgbMatrix)
with(Insurance, sum(Claims))              # 3151
with(Insurance, sum(predFreq * Holders))  # approximately the same total
xgbMatrix <- xgb.DMatrix(as.matrix(tem | Poisson xgboost with exposure
Your code works just fine; you just need to increase the parameter nrounds to get the desired result. Boosting models don't converge in the first few iterations.
xgbMatrix <- xgb.DMatrix(as.matrix(temp2),
label = Insurance$freq,
weight = Insurance$Holders)
bst = xgboost(data=xgbMatrix, objective='count:poisson', nrounds=500, verbose = 0)
Insurance$predFreq<-predict(bst, xgbMatrix)
with(Insurance, sum(Claims)) #3151
with(Insurance, sum(predFreq*Holders)) #same | Poisson xgboost with exposure
Your code works just fine, you just need to increase the parameter nround to have the desired result. The Boosting models don't converge at the first iterations.
xgbMatrix <- xgb.DMatrix(as.matrix(tem |
30,860 | How to include an interaction term in a random forest model | Tree-based models consider variables sequentially, which makes them handy for considering interactions without specifying them. Interactions that are useful for prediction will be easily picked up with a large enough forest, so there's no real need to include an explicit interaction term.
If you believe that the interaction is important, you could manually create the interaction term (for example, defining your formula within the model.frame function, which will create new columns for your interaction terms). Yet in your case this would nearly double the number of variables, as you're creating interactions between rad and every other feature, so it's probably ill-advised.
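A hedged sketch of that manual route (not from the answer; it assumes the question's data are the Boston housing data from MASS, and it uses model.matrix rather than model.frame to expand a single illustrative interaction):
library(MASS)            # Boston data
library(randomForest)
X <- model.matrix(medv ~ . + rad:crim, data = Boston)[, -1]   # adds an explicit rad x crim product column
rf <- randomForest(x = X, y = Boston$medv)
rf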
See also Including Interaction Terms in Random Forest which demonstrates Random Forests' inherent ability to detect interacting variables compared to linear methods. | How to include an interaction term in a random forest model | Tree-based models consider variables sequentially, which makes them handy for considering interactions without specifying them. Interactions that are useful for prediction will be easily picked up wit | How to include an interaction term in a random forest model
Tree-based models consider variables sequentially, which makes them handy for considering interactions without specifying them. Interactions that are useful for prediction will be easily picked up with a large enough forest, so there's no real need to include an explicit interaction term.
If you believe that the interaction is important, you could manually create the interaction term (for example, defining your formula within the model.frame function, which will create new columns for your interaction terms). Yet in your case this would nearly double the number of variables, as you're creating interactions between rad and every other feature, so it's probably ill-advised.
See also Including Interaction Terms in Random Forest which demonstrates Random Forests' inherent ability to detect interacting variables compared to linear methods. | How to include an interaction term in a random forest model
Tree-based models consider variables sequentially, which makes them handy for considering interactions without specifying them. Interactions that are useful for prediction will be easily picked up wit |
30,861 | Bayesian change point detection | Briefly, the package mcp does Bayesian change point regression. As of v0.2, it takes Gaussian, Binomial, Bernoulli, and Poisson. Modeling your data as four intercept-only segments:
model = list(
y ~ 1, # Intercept
~ 1, # etc...
~ 1,
~ 1
)
library(mcp)
df = data.frame(x = seq_along(coverages), y = coverages)
fit = mcp(model, df, par_x = "x")
Let's plot it with a prediction interval, just for fun (green dashed lines). The blue curves are posterior densities for the change point locations. The gray lines are random draws from the posterior.
plot(fit, q_predict = T)
You can use plot_pars() to plot individual parameter estimates. Here are the summaries, where cp_* are the change point estimates:
summary(fit)
Family: gaussian(link = 'identity')
Iterations: 9000 from 3 chains.
Segments:
1: y ~ 1
2: y ~ 1 ~ 1
3: y ~ 1 ~ 1
4: y ~ 1 ~ 1
Population-level parameters:
name mean lower upper Rhat n.eff
cp_1 101.280 99.38 103.0000 1 5627
cp_2 199.562 199.00 200.4314 1 5038
cp_3 299.365 296.85 301.7760 1 2340
int_1 -0.047 -0.11 0.0104 1 5614
int_2 -0.620 -0.68 -0.5592 1 5792
int_3 0.423 0.37 0.4838 1 6463
int_4 -0.018 -0.04 0.0036 1 5382
sigma_1 0.295 0.28 0.3082 1 5963
Read more on the mcp website. Disclaimer: I am the developer of mcp. | Bayesian change point detection | Briefly, the package mcp does Bayesian change point regression. As of v0.2, it takes Gaussian, Binomial, Bernoulli, and Poisson. Modeling your data as four intercept-only segments:
model = list(
y ~ | Bayesian change point detection
Briefly, the package mcp does Bayesian change point regression. As of v0.2, it takes Gaussian, Binomial, Bernoulli, and Poisson. Modeling your data as four intercept-only segments:
model = list(
y ~ 1, # Intercept
~ 1, # etc...
~ 1,
~ 1
)
library(mcp)
df = data.frame(x = seq_along(coverages), y = coverages)
fit = mcp(model, df, par_x = "x")
Let's plot it with a prediction interval, just for fun (green dashed lines). The blue curves are posterior densities for the change point locations. The gray lines are random draws from the posterior.
plot(fit, q_predict = T)
You can use plot_pars() to plot individual parameter estimates. Here are the summaries, where cp_* are the change point estimates:
summary(fit)
Family: gaussian(link = 'identity')
Iterations: 9000 from 3 chains.
Segments:
1: y ~ 1
2: y ~ 1 ~ 1
3: y ~ 1 ~ 1
4: y ~ 1 ~ 1
Population-level parameters:
name mean lower upper Rhat n.eff
cp_1 101.280 99.38 103.0000 1 5627
cp_2 199.562 199.00 200.4314 1 5038
cp_3 299.365 296.85 301.7760 1 2340
int_1 -0.047 -0.11 0.0104 1 5614
int_2 -0.620 -0.68 -0.5592 1 5792
int_3 0.423 0.37 0.4838 1 6463
int_4 -0.018 -0.04 0.0036 1 5382
sigma_1 0.295 0.28 0.3082 1 5963
Read more on the mcp website. Disclaimer: I am the developer of mcp. | Bayesian change point detection
Briefly, the package mcp does Bayesian change point regression. As of v0.2, it takes Gaussian, Binomial, Bernoulli, and Poisson. Modeling your data as four intercept-only segments:
model = list(
y ~ |
30,862 | Bayesian change point detection | The two good papers on this subject are below:
1) Bayesian Online Change Point Detection
2) Modeling changing dependency structure in multivariate time series
These do not apply a clustering algorithm but take the interval (since the last change point) into account as you have asked for. And they work with parametric distributions.
The paper by Adams and Mackay (the first one) also has the algorithm implemented in MatLab and Python. | Bayesian change point detection | The two good papers on this subject are below:
1) Bayesian Online Change Point Detection
2) Modeling changing dependency structure in multivariate time series
These do not apply a clustering algorithm | Bayesian change point detection
The two good papers on this subject are below:
1) Bayesian Online Change Point Detection
2) Modeling changing dependency structure in multivariate time series
These do not apply a clustering algorithm but take the interval (since the last change point) into account as you have asked for. And they work with parametric distributions.
The paper by Adams and Mackay (the first one) also has the algorithm implemented in MatLab and Python. | Bayesian change point detection
The two good papers on this subject are below:
1) Bayesian Online Change Point Detection
2) Modeling changing dependency structure in multivariate time series
These do not apply a clustering algorithm |
30,863 | Bayesian change point detection | Numerous packages are available in R for changepoint or breakpoint detection but the majority of them are non-Bayesian. Many such packages are touched in @Jonas Lindeløv's blog post: https://lindeloev.github.io/mcp/articles/packages.html. Glad that he also pointed to his site in the answer above.
In addition to mcp, bcp is also a popular Bayesian changepoint detection model and it has been actually used to analyze copy-number alteration sequence data--the same use case as your publication. For completeness, I put some quick results here with your sample data:
set.seed(1234)
variances = runif(1000, 0.01, 0.5)
coverages = c()
for (i in seq(1:100)) {
coverages = c(coverages, rnorm(1, mean=0, sd=variances[i]))
}
for (i in seq(101:200)) {
coverages <- c(coverages, rnorm(1, mean=-log(2), sd=variances[i] / 0.75))
}
for (i in seq(201:300)) {
coverages <- c(coverages, rnorm(1, mean=log(3/2), sd=variances[i] * 0.75))
}
for (i in seq(301:1000)) {
coverages <- c(coverages, rnorm(1, mean=0, sd=variances[i]))
}
library(bcp)
out = bcp(coverages)
plot(out)
Here is the bcp output:
Another package for Bayesian changepoint detection is Rbeast (https://github.com/zhaokg/Rbeast, written by me), which handles only time-series or sequence-like data and therefore is not as versatile as mcp or bcp, but it is useful for your use case. Rbeast also aims to decompose time series into periodic and trend components. Since your sequence contains no periodic/seasonal component, season='none' is set in the code below and only the trend component is fitted:
library(Rbeast)
out = beast(coverages, season='none')
plot(out)
Below is the Rbeast output:
For each segment, Rbeast fits either a linear (1st-order polynomial) or constant (0th-order polynomial) model; the average polynomial order needed to adequately fit the trend is estimated over time and depicted as the Order_t curve in the figure: all close to 0, suggesting an overall flat curve over individual segments. For your sample time series, since we know beforehand that the segments are constant, we can enforce this strong prior by setting the minimum and maximum polynomial order of the segments both to 0, torder.minmax=c(0,0), so that only constant lines are fitted.
out = beast(coverages, season='none', torder.minmax=c(0,0) )
plot(out) | Bayesian change point detection | Numerous packages are available in R for changepoint or breakpoint detection but the majority of them are non-Bayesian. Many such packages are touched in @Jonas Lindeløv's blog post: https://lindeloev | Bayesian change point detection
Numerous packages are available in R for changepoint or breakpoint detection, but the majority of them are non-Bayesian. Many such packages are touched on in @Jonas Lindeløv's blog post: https://lindeloev.github.io/mcp/articles/packages.html. I am glad that he also pointed to his site in the answer above.
In addition to mcp, bcp is also a popular Bayesian changepoint detection model, and it has actually been used to analyze copy-number alteration sequence data--the same use case as your publication. For completeness, I put some quick results here with your sample data:
set.seed(1234)
variances = runif(1000, 0.01, 0.5)
coverages = c()
for (i in seq(1:100)) {
coverages = c(coverages, rnorm(1, mean=0, sd=variances[i]))
}
for (i in seq(101:200)) {
coverages <- c(coverages, rnorm(1, mean=-log(2), sd=variances[i] / 0.75))
}
for (i in seq(201:300)) {
coverages <- c(coverages, rnorm(1, mean=log(3/2), sd=variances[i] * 0.75))
}
for (i in seq(301:1000)) {
coverages <- c(coverages, rnorm(1, mean=0, sd=variances[i]))
}
library(bcp)
out = bcp(coverages)
plot(out)
Here is the bcp output:
Another package for Bayesian changepoint detection is Rbeast (https://github.com/zhaokg/Rbeast, written by me), which handles only time-series or sequence-like data and therefore is not as versatile as mcp or bcp, but it is useful for your use case. Rbeast also aims to decompose time series into periodic and trend components. Since your sequence contains no periodic/seasonal component, season='none' is set in the code below and only the trend component is fitted:
library(Rbeast)
out = beast(coverages, season='none')
plot(out)
Below is the Rbeast output:
For each segment, Rbeast fits either a linear (1st-order polynomial) or constant (0th-order polynomial) model; the average polynomial order needed to adequately fit the trend is estimated over time and depicted as the Order_t curve in the figure: all close to 0, suggesting an overall flat curve over individual segments. For your sample time series, since we know beforehand that the segments are constant, we can enforce this strong prior by setting the minimum and maximum polynomial order of the segments both to 0, torder.minmax=c(0,0), so that only constant lines are fitted.
out = beast(coverages, season='none', torder.minmax=c(0,0) )
plot(out) | Bayesian change point detection
Numerous packages are available in R for changepoint or breakpoint detection but the majority of them are non-Bayesian. Many such packages are touched in @Jonas Lindeløv's blog post: https://lindeloev |
30,864 | Bayesian change point detection | This is more of a comment than an answer but it's too long to be a comment:
The "bible" for sequential analysis is probably 2014's book Sequential Analysis: Hypothesis Testing and Changepoint Detection by Alexander Tartakovsky. It is seemingly exhaustive in its coverage of the topic.
http://www.amazon.com/Sequential-Analysis-Hypothesis-Changepoint-Probability-ebook/dp/B00MMOIWTS/ref=sr_1_1?ie=UTF8&qid=1445511005&sr=8-1&keywords=sequential+analysis+tartakovsky
That said, in June 2014 Columbia sponsored The Fifth International Workshop in Sequential Methodologies which brought together the latest and greatest practitioners in the field. Tartakovsky was on the organizing committee.
https://sites.google.com/site/iwsm2015/home
See the "Detailed Program" link on the conference website for abstracts and papers. There's probably something there targeted specifically to your question.
Response lifted from this CV thread as written by me:
sequential estimators for a proportion | Bayesian change point detection | This is more of a comment than an answer but it's too long to be a comment:
The "bible" for sequential analysis is probably 2014's book Sequential Analysis: Hypothesis Testing and Changepoint Detectio | Bayesian change point detection
This is more of a comment than an answer but it's too long to be a comment:
The "bible" for sequential analysis is probably 2014's book Sequential Analysis: Hypothesis Testing and Changepoint Detection by Alexander Tartakovsky. It is seemingly exhaustive in its coverage of the topic.
http://www.amazon.com/Sequential-Analysis-Hypothesis-Changepoint-Probability-ebook/dp/B00MMOIWTS/ref=sr_1_1?ie=UTF8&qid=1445511005&sr=8-1&keywords=sequential+analysis+tartakovsky
That said, in June 2014 Columbia sponsored The Fifth International Workshop in Sequential Methodologies which brought together the latest and greatest practitioners in the field. Tartakovsky was on the organizing committee.
https://sites.google.com/site/iwsm2015/home
See the "Detailed Program" link on the conference website for abstracts and papers. There's probably something there targeted specifically to your question.
Response lifted from this CV thread as written by me:
sequential estimators for a proportion | Bayesian change point detection
This is more of a comment than an answer but it's too long to be a comment:
The "bible" for sequential analysis is probably 2014's book Sequential Analysis: Hypothesis Testing and Changepoint Detectio |
30,865 | Random effect specification in lmer mixed effect model | If you have two categorical factors f and g, then (1|f/g) expands to (1|f) + (1|f:g), i.e. variation in the intercept (that's the 1 on the left-hand side of the bar) among levels of f and among levels of f:g (the interaction between f and g). This is also referred to as a random effect of g nested within f (order matters here). This is the traditional way to combine two random factors in a classical ANOVA model, because in that framework random effects must be nested (i.e. either f is nested within g or g is nested with f). (See http://glmm.wikidot.com/faq for more information on nested factors.) This model estimates two parameters, i.e. $\sigma^2_f$ and $\sigma^2_{f:g}$, no matter how many levels each categorical variable has. It would be a typical model for a nested design.
In contrast, (f|g) specifies that the effects of f vary across levels of g: for example, if f is a two-level categorical variable with levels "control" and "treatment", then this model specifies that we are allowing both the intercept (control response) and the treatment effect (difference between control and treatment responses) to vary across levels of g. Each effect has its own variance, and by default lme4 fits covariances among each of the parameters. This model would estimate parameters $\sigma^2_{g,c}$, $\sigma^2_{g,t}$, and $\sigma_{g,c\cdot t}$, where the last refers to the covariance between control and treatment effects. If $f$ has $n$ levels, this model estimates $n(n+1)/2$ parameters; it is most appropriate for a randomized-block design where each treatment is repeated in every block.
If f has many levels, the latter (f|g) model specification can imply models with many parameters; there is an ongoing debate (see e.g. this ArXiv paper) about the best way to handle this situation.
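A small simulated sketch (data and names made up here) showing the two kinds of terms side by side in lme4:
library(lme4)
set.seed(1)
d <- expand.grid(f = factor(1:3), g = factor(1:10), rep = 1:5)
d$y <- rnorm(nrow(d)) + rnorm(nlevels(d$g), sd = 0.5)[d$g]   # noise plus a little g-level signal
m_nested  <- lmer(y ~ 1 + (1 | f/g), data = d)   # expands to (1|f) + (1|f:g): two variances
m_varying <- lmer(y ~ f + (f | g), data = d)     # effect of f varies across g: variances plus covariances
VarCorr(m_nested)
VarCorr(m_varying)                               # with this toy data some variances may be estimated near zero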
If instead we consider (x|g) where x is a continuous (numeric) input variable, then the term specifies a random-slopes model; the intercept (implicitly) and slope with respect to x both vary across levels of g (a covariance term is also fitted).
In this case, (g|x) would make no sense - the term on the right side of the bar is a grouping variable, and is always interpreted as categorical. The only case where it could make sense is in a design where x was continuous, but multiple observations were taken at each level, and where you wanted to treat x as a categorical variable for modeling purposes. | Random effect specification in lmer mixed effect model | If you have two categorical factors f and g, then (1|f/g) expands to (1|f) + (1|f:g), i.e. variation in the intercept (that's the 1 on the left-hand side of the bar) among levels of f and among levels | Random effect specification in lmer mixed effect model
If you have two categorical factors f and g, then (1|f/g) expands to (1|f) + (1|f:g), i.e. variation in the intercept (that's the 1 on the left-hand side of the bar) among levels of f and among levels of f:g (the interaction between f and g). This is also referred to as a random effect of g nested within f (order matters here). This is the traditional way to combine two random factors in a classical ANOVA model, because in that framework random effects must be nested (i.e. either f is nested within g or g is nested with f). (See http://glmm.wikidot.com/faq for more information on nested factors.) This model estimates two parameters, i.e. $\sigma^2_f$ and $\sigma^2_{f:g}$, no matter how many levels each categorical variable has. It would be a typical model for a nested design.
In contrast, (f|g) specifies that the effects of f vary across levels of g: for example, if f is a two-level categorical variable with levels "control" and "treatment", then this model specifies that we are allowing both the intercept (control response) and the treatment effect (difference between control and treatment responses) to vary across levels of g. Each effect has its own variance, and by default lme4 fits covariances among each of the parameters. This model would estimate parameters $\sigma^2_{g,c}$, $\sigma^2_{g,t}$, and $\sigma_{g,c\cdot t}$, where the last refers to the covariance between control and treatment effects. If $f$ has $n$ levels, this model estimates $n(n+1)/2$ parameters; it is most appropriate for a randomized-block design where each treatment is repeated in every block.
If f has many levels, the latter (f|g) model specification can imply models with many parameters; there is an ongoing debate (see e.g. this ArXiv paper) about the best way to handle this situation.
If instead we consider (x|g) where x is a continuous (numeric) input variable, then the term specifies a random-slopes model; the intercept (implicitly) and slope with respect to x both vary across levels of g (a covariance term is also fitted).
In this case, (g|x) would make no sense - the term on the right side of the bar is a grouping variable, and is always interpreted as categorical. The only case where it could make sense is in a design where x was continuous, but multiple observations were taken at each level, and where you wanted to treat x as a categorical variable for modeling purposes. | Random effect specification in lmer mixed effect model
If you have two categorical factors f and g, then (1|f/g) expands to (1|f) + (1|f:g), i.e. variation in the intercept (that's the 1 on the left-hand side of the bar) among levels of f and among levels |
30,866 | Distribution of sum of squares of normals that have mean zero but not variance one? | $X_i \sim \mathcal{N}(0,\sigma^2_i) \Rightarrow \frac{X_i}{\sigma_i}\sim \mathcal{N}(0,1) $
$\therefore$ $\frac{X_i^2}{\sigma_i^2} \sim \chi^2(1)=\Gamma(1/2,2)$
$X_i^2 \sim \sigma_i^2\Gamma(1/2,2)=\Gamma(1/2,2\sigma_i^2)$
If your $\sigma_i$s are all equal to a common $\sigma$ then
$\sum_{i=1}^n X_i^2 \sim \sum_{i=1}^n\Gamma(1/2, 2\sigma^2)=\Gamma(n/2,2\sigma^2)$, since a sum of independent gamma variables with the same scale adds their shape parameters,
i.e. $\sum_{i=1}^n X_i^2$ has a gamma distribution with shape $k=n/2$ and scale $\theta=2\sigma^2$.
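A quick sanity check in R (a sketch; n and sigma are arbitrary):
set.seed(1)
n <- 5; sigma <- 1.7
ss <- replicate(1e4, sum(rnorm(n, 0, sigma)^2))
ks.test(ss, "pgamma", shape = n / 2, scale = 2 * sigma^2)   # should not reject the Gamma(n/2, 2*sigma^2) fit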
If your $\sigma_i$s are not all equal, then
ref this | Distribution of sum of squares of normals that have mean zero but not variance one? | $X_i \sim \mathcal{N}(0,\sigma^2_i) \Rightarrow \frac{X_i}{\sigma_i}\sim \mathcal{N}(0,1) $
$\therefore$ $\frac{X_i^2}{\sigma_i^2} \sim \chi^2(1)=\Gamma(1/2,2)$
$X_i^2 \sim \sigma_i^2\Gamma(1/2,2)=\G | Distribution of sum of squares of normals that have mean zero but not variance one?
$X_i \sim \mathcal{N}(0,\sigma^2_i) \Rightarrow \frac{X_i}{\sigma_i}\sim \mathcal{N}(0,1) $
$\therefore$ $\frac{X_i^2}{\sigma_i^2} \sim \chi^2(1)=\Gamma(1/2,2)$
$X_i^2 \sim \sigma_i^2\Gamma(1/2,2)=\Gamma(1/2,2\sigma_i^2)$
If your $\sigma_i$s are all equal to a common $\sigma$ then
$\sum_{i=1}^n X_i^2 \sim \sum_{i=1}^n\Gamma(1/2, 2\sigma^2)=\Gamma(n/2,2\sigma^2)$, since a sum of independent gamma variables with the same scale adds their shape parameters,
i.e. $\sum_{i=1}^n X_i^2$ has a gamma distribution with shape $k=n/2$ and scale $\theta=2\sigma^2$.
If your $\sigma_i$s are not all equal, then
ref this | Distribution of sum of squares of normals that have mean zero but not variance one?
$X_i \sim \mathcal{N}(0,\sigma^2_i) \Rightarrow \frac{X_i}{\sigma_i}\sim \mathcal{N}(0,1) $
$\therefore$ $\frac{X_i^2}{\sigma_i^2} \sim \chi^2(1)=\Gamma(1/2,2)$
$X_i^2 \sim \sigma_i^2\Gamma(1/2,2)=\G |
30,867 | What scientific field(s) studies how people interpret quantitative summaries and visualizations? | Gerd Gigerenzer is widely acknowledged as one of the world experts in the cognitive aspects of numeracy or, alternatively, innumeracy. He has many papers and books on these topics referenced on his website (https://www.mpib-berlin.mpg.de/en/staff/gerd-gigerenzer). One of his key texts is his 2002 book Calculated risks: How to know when numbers deceive you. Read the abstract here: https://www.mpib-berlin.mpg.de/en/research/adaptive-behavior-and-cognition/publications/books/calculated-risks
Related to Gigerenzer's work is cognition-based decision theoretic work that looks at the way information is presented. A representative paper here is Dan Goldstein's The Illusion of Wealth and its Reversal available here ... http://rady.ucsd.edu/docs/seminars/goldstein.pdf Here's from the intro:
Recently, researchers and policy makers have started to pay more
attention not just to choice architecture but also to information
architecture: the format in which information is presented to people.
Research in information architecture has shown, for example, that the
caloric content of food can be well appreciated in terms of the amount
of exercise it would take to work calories off, and the comprehension
of cars’ energy efficiency can be enhanced by presenting information
in terms of gallons per 100 miles instead of miles per gallon. This
paper investigates information architecture, though instead of
consuming calories or gasoline, we address economic consumption in
retirement.
An important recent addition to the literature is Berkeley Dietvorst's research into "algorithm aversion" and decision-making. Dietvorst contends that wrt predictive modeling, the technically naive and/or illiterate tend to assume that predictive models are a "magic bullet" or perfectly informative and when the algorithms prove to be, at best, weakly predictive, then the typical response is to reject quantitative solutions altogether.
https://marketing.wharton.upenn.edu/mktg/assets/File/Dietvorst%20Simmons%20&%20Massey%202014.pdf
Then there are bloggers like Kaiser Fung who maintains his Junkcharts website critiquing the graphs and visualizations of major pubs such as the NYTs or the WSJ
http://junkcharts.typepad.com/
Related to your question of visualization is the work of design experts such as Manuel Lima who maintains a website VisualComplexity.com covering the many approaches to this. Lima also teaches data visualization at Parsons School of Design in NYC.
http://www.visualcomplexity.com/vc/
Besides Parsons, other design and visualization institutions include:
College of Design and Social Context
https://www.rmit.edu.au/about/our-education/academic-colleges/college-of-design-and-social-context/
UCLA's Culture Analytics Institute
http://www.ipam.ucla.edu/programs/long-programs/culture-analytics/
Google's Cultural Institute
https://www.google.com/culturalinstitute/home
A MoMA design exhibition and book
http://www.moma.org/calendar/exhibitions/1071?locale=en
http://www.amazon.com/Talk-Me-Communication-between-Objects/dp/0870707965
In terms of conferences there is the Eyeo Festival
http://eyeofestival.com/
In R software, the visualization guru is Hadley Wickham
http://had.co.nz/
In SAS software, there is Rob Allison
http://www.robslink.com/SAS/graph_book.htm
Finally, there is no shortage of "one-off" kinds of websites:
http://infosthetics.com/ great visuals of govt data
http://www.thefunctionalart.com/2012/09/in-praise-of-connected-scatter-plots.html
http://www.informationisbeautifulawards.com/
How to display data badly by Karl Broman
https://www.biostat.wisc.edu/~kbroman/presentations/IowaState2013/graphs_combined.pdf
https://www.biostat.wisc.edu/~kbroman/presentations/IowaState2013/index.html
Maria Popova's Design and Communication blog
https://www.brainpickings.org/2012/06/26/talk-to-me-moma-paola-antonelli-book/
Gallery of Data Visualization
http://www.datavis.ca/gallery/index.php
Periodic Table of Data Visualization
http://www.visual-literacy.org/periodic_table/periodic_table.html
Our World in Data
http://ourworldindata.org/
This just begins to scratch the surface of what's out there... | What scientific field(s) studies how people interpret quantitative summaries and visualizations? | Gerd Gigerenzer is widely acknowledged as one of the world experts in the cognitive aspects of numeracy or, alternatively, innumeracy. He has many papers and books on these topics referenced on his we | What scientific field(s) studies how people interpret quantitative summaries and visualizations?
Gerd Gigerenzer is widely acknowledged as one of the world experts in the cognitive aspects of numeracy or, alternatively, innumeracy. He has many papers and books on these topics referenced on his website (https://www.mpib-berlin.mpg.de/en/staff/gerd-gigerenzer). One of his key texts is his 2002 book Calculated risks: How to know when numbers deceive you. Read the abstract here: https://www.mpib-berlin.mpg.de/en/research/adaptive-behavior-and-cognition/publications/books/calculated-risks
Related to Gigerenzer's work is cognition-based decision theoretic work that looks at the way information is presented. A representative paper here is Dan Goldstein's The Illusion of Wealth and its Reversal available here ... http://rady.ucsd.edu/docs/seminars/goldstein.pdf Here's from the intro:
Recently, researchers and policy makers have started to pay more
attention not just to choice architecture but also to information
architecture: the format in which information is presented to people.
Research in information architecture has shown, for example, that the
caloric content of food can be well appreciated in terms of the amount
of exercise it would take to work calories off, and the comprehension
of cars’ energy efficiency can be enhanced by presenting information
in terms of gallons per 100 miles instead of miles per gallon. This
paper investigates information architecture, though instead of
consuming calories or gasoline, we address economic consumption in
retirement.
An important recent addition to the literature is Berkeley Dietvorst's research into "algorithm aversion" and decision-making. Dietvorst contends that wrt predictive modeling, the technically naive and/or illiterate tend to assume that predictive models are a "magic bullet" or perfectly informative and when the algorithms prove to be, at best, weakly predictive, then the typical response is to reject quantitative solutions altogether.
https://marketing.wharton.upenn.edu/mktg/assets/File/Dietvorst%20Simmons%20&%20Massey%202014.pdf
Then there are bloggers like Kaiser Fung who maintains his Junkcharts website critiquing the graphs and visualizations of major pubs such as the NYTs or the WSJ
http://junkcharts.typepad.com/
Related to your question of visualization is the work of design experts such as Manuel Lima who maintains a website VisualComplexity.com covering the many approaches to this. Lima also teaches data visualization at Parsons School of Design in NYC.
http://www.visualcomplexity.com/vc/
Besides Parsons, other design and visualization institutions include:
College of Design and Social Context
https://www.rmit.edu.au/about/our-education/academic-colleges/college-of-design-and-social-context/
UCLA's Culture Analytics Institute
http://www.ipam.ucla.edu/programs/long-programs/culture-analytics/
Google's Cultural Institute
https://www.google.com/culturalinstitute/home
A MoMA design exhibition and book
http://www.moma.org/calendar/exhibitions/1071?locale=en
http://www.amazon.com/Talk-Me-Communication-between-Objects/dp/0870707965
In terms of conferences there is the Eyeo Festival
http://eyeofestival.com/
In R software, the visualization guru is Hadley Wickham
http://had.co.nz/
In SAS software, there is Rob Allison
http://www.robslink.com/SAS/graph_book.htm
Finally, there is no shortage of "one-off" kinds of websites:
http://infosthetics.com/ great visuals of govt data
http://www.thefunctionalart.com/2012/09/in-praise-of-connected-scatter-plots.html
http://www.informationisbeautifulawards.com/
How to display data badly by Karl Broman
https://www.biostat.wisc.edu/~kbroman/presentations/IowaState2013/graphs_combined.pdf
https://www.biostat.wisc.edu/~kbroman/presentations/IowaState2013/index.html
Maria Popova's Design and Communication blog
https://www.brainpickings.org/2012/06/26/talk-to-me-moma-paola-antonelli-book/
Gallery of Data Visualization
http://www.datavis.ca/gallery/index.php
Periodic Table of Data Visualization
http://www.visual-literacy.org/periodic_table/periodic_table.html
Our World in Data
http://ourworldindata.org/
This just begins to scratch the surface of what's out there... | What scientific field(s) studies how people interpret quantitative summaries and visualizations?
Gerd Gigerenzer is widely acknowledged as one of the world experts in the cognitive aspects of numeracy or, alternatively, innumeracy. He has many papers and books on these topics referenced on his we |
30,868 | What scientific field(s) studies how people interpret quantitative summaries and visualizations? | Psychophysics studies how humans respond to and interpret stimuli, to include interpretation of data visualizations. The Cleveland and McGill paper linked in the comments is an example, and the second section of this paper gives a quick overview of a few perspectives.
Numerical or mathematical cognition is a sub-discipline of cognitive science that studies things like number sense. It sometimes borrows concepts from psychophysics, for instance Fechner's scale, which "states that subjective sensation is proportional to the logarithm of the stimulus intensity." Wiki's description of the concept applied to numerical cognition:
Psychological studies show that it becomes increasingly difficult to discriminate among two numbers as the difference between them decreases. This is called the distance effect. This is important in areas of magnitude estimation, such as dealing with large scales and estimating distances. It may also play a role in explaining why consumers neglect to shop around to save a small percentage on a large purchase, but will shop around to save a large percentage on a small purchase which represents a much smaller absolute dollar amount.
Related, in behavioral economics, prospect theory (original paper) examines human choices between risky, probabilistic alternatives. | What scientific field(s) studies how people interpret quantitative summaries and visualizations? | Psychophysics studies how humans respond to and interpret stimuli, to include interpretation of data visualizations. The Cleveland and McGill paper linked in the comments is an example, and the second | What scientific field(s) studies how people interpret quantitative summaries and visualizations?
Psychophysics studies how humans respond to and interpret stimuli, to include interpretation of data visualizations. The Cleveland and McGill paper linked in the comments is an example, and the second section of this paper gives a quick overview of a few perspectives.
Numerical or mathematical cognition is a sub-discipline of cognitive science that studies things like number sense. It sometimes borrows concepts from psychophysics, for instance Fechner's scale, which "states that subjective sensation is proportional to the logarithm of the stimulus intensity." Wiki's description of the concept applied to numerical cognition:
Psychological studies show that it becomes increasingly difficult to discriminate among two numbers as the difference between them decreases. This is called the distance effect. This is important in areas of magnitude estimation, such as dealing with large scales and estimating distances. It may also play a role in explaining why consumers neglect to shop around to save a small percentage on a large purchase, but will shop around to save a large percentage on a small purchase which represents a much smaller absolute dollar amount.
Related, in behavioral economics, prospect theory (original paper) examines human choices between risky, probabilistic alternatives. | What scientific field(s) studies how people interpret quantitative summaries and visualizations?
Psychophysics studies how humans respond to and interpret stimuli, to include interpretation of data visualizations. The Cleveland and McGill paper linked in the comments is an example, and the second |
30,869 | Mean of two normal distributions | The sum of two independent normal variables is a normal random variable, e.g. $x\sim\mathcal{N}(\mu_x,\sigma_x^2)$ and $y\sim\mathcal{N}(\mu_y,\sigma_y^2)$ will get you $$\alpha x+(1-\alpha)y\sim\mathcal{N}(\alpha\mu_x+(1-\alpha)\mu_y,\alpha^2\sigma_x^2+(1-\alpha)^2\sigma_y^2)$$
Here, you could use $\alpha=\frac{1}{2}$ for an equal weight mean.
If you assume that both instruments are unbiased, then you actually have a simpler situation:
$$x\sim\mathcal{N}(\mu,\sigma_x^2)$$
$$y\sim\mathcal{N}(\mu,\sigma_y^2)$$
In this case you assume that on average both instruments are accurate (as defined by IUPAC), i.e. have no bias. However, their precisions $\sigma_x,\sigma_y$ differ.
Let's construct a weighted estimator
$$\hat\mu=\alpha x + (1-\alpha) y$$
Let's look at its characteristics:
$$E[\hat\mu]=\alpha\mu+(1-\alpha)\mu=\mu $$
Good, it's unbiased regardless of the weight $\alpha$, i.e. it's accurate.
Let's see what's its precision:
$$Var[\hat\mu]=\alpha^2\sigma_x^2+(1-\alpha)^2\sigma_y^2$$
The independence assumption for the normal variables is usually reasonable for instrument measurements unless they're affected by the exact same random shocks, which may happen in certain setups but is not usually encountered.
In this case the optimal weight is
$$\alpha=\frac{\sigma_y^2}{\sigma_x^2+\sigma_y^2}$$
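A quick simulation check of this weight (my own sketch; the particular sigmas are arbitrary):
set.seed(1)
sx <- 1; sy <- 2; mu <- 10
x <- rnorm(1e5, mu, sx); y <- rnorm(1e5, mu, sy)
alpha <- sy^2 / (sx^2 + sy^2)
var(alpha * x + (1 - alpha) * y)   # about 0.8
var(0.5 * x + 0.5 * y)             # about 1.25, worse than the optimal weight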
You can see that if the precisions are the same, the weight is $\alpha=1/2$. Otherwise, if the first instrument is twice as precise, e.g. $\sigma_x=\sigma_y/2$, then you get $$\alpha=\frac{4}{4+1}=0.8$$ | Mean of two normal distributions | The sum of two independent normal variables is normal random variable, e.g. $x\sim\mathcal{N}(\mu_x,\sigma_x^2)$ and $y\sim\mathcal{N}(\mu_y,\sigma_y^2)$ will get you $$\alpha x+(1-\alpha)y\sim\mathca
The sum of two independent normal variables is a normal random variable, e.g. $x\sim\mathcal{N}(\mu_x,\sigma_x^2)$ and $y\sim\mathcal{N}(\mu_y,\sigma_y^2)$ will get you $$\alpha x+(1-\alpha)y\sim\mathcal{N}(\alpha\mu_x+(1-\alpha)\mu_y,\alpha^2\sigma_x^2+(1-\alpha)^2\sigma_y^2)$$
Here, you could use $\alpha=\frac{1}{2}$ for an equal weight mean.
If you assume that both instruments are unbiased, then you actually have a simpler situation:
$$x\sim\mathcal{N}(\mu,\sigma_x^2)$$
$$y\sim\mathcal{N}(\mu,\sigma_y^2)$$
In this case you assume that on average both instruments are accurate (as defined by IUPAC), i.e. have no bias. However, their precisions $\sigma_x,\sigma_y$ differ.
Let's construct a weighted estimator
$$\hat\mu=\alpha x + (1-\alpha) y$$
Let's look at its characteristics:
$$E[\hat\mu]=\alpha\mu+(1-\alpha)\mu=\mu $$
Good, it's unbiased regardless of the weight $\alpha$, i.e. it's accurate.
Let's see what's its precision:
$$Var[\hat\mu]=\alpha^2\sigma_x^2+(1-\alpha)^2\sigma_y^2$$
The independence assumption for the normal variables is usually reasonable for instrument measurements unless they're affected by the exact same random shocks, which may happen in certain setups but is not usually encountered.
In this case the optimal weight is
$$\alpha=\frac{\sigma_y^2}{\sigma_x^2+\sigma_y^2}$$
You can see that if the precisions are the same, the weight is $\alpha=1/2$. Otherwise, if the first instrument is twice as precise, e.g. $\sigma_x=\sigma_y/2$, then you get $$\alpha=\frac{4}{4+1}=0.8$$
The sum of two independent normal variables is normal random variable, e.g. $x\sim\mathcal{N}(\mu_x,\sigma_x^2)$ and $y\sim\mathcal{N}(\mu_y,\sigma_y^2)$ will get you $$\alpha x+(1-\alpha)y\sim\mathca |
30,870 | Mean of two normal distributions | I will edit this answer into a more elaborate one later in the day.
You can consider the geodesic between your two densities and pick-up the distribution at the mid-distance. These densities have an hyperbolic geometry under the Fisher-Rao metric. You can google SIR Costa Information Geometry for detailed computations and close-form expressions. | Mean of two normal distributions | I will edit this answer into a more elaborate one later in the day.
You can consider the geodesic between your two densities and pick-up the distribution at the mid-distance. These densities have an | Mean of two normal distributions
I will edit this answer into a more elaborate one later in the day.
You can consider the geodesic between your two densities and pick-up the distribution at the mid-distance. These densities have an hyperbolic geometry under the Fisher-Rao metric. You can google SIR Costa Information Geometry for detailed computations and close-form expressions. | Mean of two normal distributions
I will edit this answer into a more elaborate one later in the day.
You can consider the geodesic between your two densities and pick-up the distribution at the mid-distance. These densities have an |
30,871 | Halving a discrete random variable? | A notion strongly related to this property (if weaker) is decomposability. A decomposable law is a probability distribution that can be represented as the distribution of a sum of two (or more) non-trivial independent random variables. (And an indecomposable law cannot be written that way. The "or more" is definitely irrelevant.) A necessary and sufficient condition for decomposability is that the characteristic function $$\psi(t)=\mathbb{E}[\exp\{itX\}]$$ is the product of two (or more) characteristic functions.
I do not know whether or not the property you consider already has a name in probability theory; it may be linked with infinite divisibility, which is a much stronger property of $X$ but which includes this one: all infinitely divisible rv's satisfy this decomposition.
A necessary and sufficient condition for this "primary divisibility" is that the square root of the characteristic function $$\psi(t)=\mathbb{E}[\exp\{itX\}]$$ is again a characteristic function.
In the case of distributions with integer support, this is rarely the case since the characteristic function is a polynomial in $\exp\{it\}$. For instance, a Bernoulli random variable is not decomposable.
As pointed out in the Wikipedia page on decomposability, there also exist absolutely continuous distributions that are non-decomposable, like the one with density$$f(x)=\frac{x^2}{\sqrt{2\pi}}\exp\{-x^2/2\}$$
In the event the characteristic function of $X$ is real-valued, Polya's theorem can be used:
Pólya’s theorem. If φ is a real-valued, even, continuous function which satisfies the conditions
φ(0) = 1,
φ is convex on (0,∞),
φ(∞) = 0,
then φ is the characteristic function of an absolutely continuous
symmetric distribution.
Indeed, in this case, $\varphi^{1/2}$ is again real-valued. Therefore, a sufficient condition for $X$ to be primary divisible is that $\sqrt{\varphi}$ is convex (so that $\sqrt{\varphi}$ again satisfies Pólya's conditions). But the theorem only applies to symmetric distributions, so it is of much more limited use than Bochner's theorem, for instance. | Halving a discrete random variable? | A notion strongly related to this property (if weaker) is decomposability. A decomposable law is a probability distribution that can be represented as the distribution of a sum of two (or more) non-tr
A notion strongly related to this property (if weaker) is decomposability. A decomposable law is a probability distribution that can be represented as the distribution of a sum of two (or more) non-trivial independent random variables. (And an indecomposable law cannot be written that way. The "or more" is definitely irrelevant.) A necessary and sufficient condition for decomposability is that the characteristic function $$\psi(t)=\mathbb{E}[\exp\{itX\}]$$ is the product of two (or more) characteristic functions.
I do not know whether or not the property you consider already has a name in probability theory; it may be linked with infinite divisibility, which is a much stronger property of $X$ but which includes this one: all infinitely divisible rv's satisfy this decomposition.
A necessary and sufficient condition for this "primary divisibility" is that the square root of the characteristic function $$\psi(t)=\mathbb{E}[\exp\{itX\}]$$ is again a characteristic function.
In the case of distributions with integer support, this is rarely the case since the characteristic function is a polynomial in $\exp\{it\}$. For instance, a Bernoulli random variable is not decomposable.
As pointed out in the Wikipedia page on decomposability, there also exist absolutely continuous distributions that are non-decomposable, like the one with density$$f(x)=\frac{x^2}{\sqrt{2\pi}}\exp\{-x^2/2\}$$
In the event the characteristic function of $X$ is real-valued, Polya's theorem can be used:
Pólya’s theorem. If φ is a real-valued, even, continuous function which satisfies the conditions
φ(0) = 1,
φ is convex on (0,∞),
φ(∞) = 0,
then φ is the characteristic function of an absolutely continuous
symmetric distribution.
Indeed, in this case, $\varphi^{1/2}$ is again real-valued. Therefore, a sufficient condition for $X$ to be primary divisible is that φ is root-convex. But it only applies to symmetric distributions, so it is of much more limited use than Bochner's theorem, for instance. | Halving a discrete random variable?
A notion strongly related to this property (if weaker) is decomposability. A decomposable law is a probability distribution that can be represented as the distribution of a sum of two (or more) non-tr |
30,872 | Halving a discrete random variable? | There are some special cases where this holds true, but for an
arbitrary discrete random variable, your "halving" is not possible.
The sum of two independent Binomial$(n,p)$ random variables is
a Binomial$(2n,p)$ random variable, and so a Binomial$(2n,p)$ can be
"halved".
Exercise: figure out whether a Binomial$(2n+1,p)$ random variable can be "halved".
Similarly, a Negative Binomial$(2n,p)$ random variable can be "halved".
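As a quick numerical sanity check (a simulation sketch added here, not part of the original answer; the choices n = 7 and p = 0.3 are arbitrary), the distribution of a Binomial$(2n,p)$ draw can be compared with the sum of two independent Binomial$(n,p)$ draws:
set.seed(1)
n <- 7; p <- 0.3; reps <- 1e5
direct <- rbinom(reps, size = 2 * n, prob = p)                                 # one Binomial(2n, p) draw per replication
halved <- rbinom(reps, size = n, prob = p) + rbinom(reps, size = n, prob = p)  # sum of two independent Binomial(n, p) draws
# the two empirical distributions should agree up to simulation noise
round(rbind(direct = table(factor(direct, levels = 0:(2 * n))) / reps,
            halved = table(factor(halved, levels = 0:(2 * n))) / reps), 3)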
The sum of two independent Poisson$(\lambda)$ random variables is a Poisson$(2\lambda)$; conversely, a Poisson$(\lambda)$ random variable is the sum of two independent Poisson$(\frac{\lambda}{2})$ random
variables. Indeed, as @Xi'an points
out in a comment, a Poisson$(\lambda)$ random variable can be "halved" as many times
as we like: for each positive integer $n$,
it is the sum of $2^n$ independent
Poisson$\left(\frac{\lambda}{2^n}\right)$ random variables. | Halving a discrete random variable? | There are some special cases where this holds true, but for an
arbitrary discrete random variable, your "halving" is not possible.
The sum of two independent Binomial$(n,p)$ random variables is a
a B | Halving a discrete random variable?
There are some special cases where this holds true, but for an
arbitrary discrete random variable, your "halving" is not possible.
The sum of two independent Binomial$(n,p)$ random variables is
a Binomial$(2n,p)$ random variable, and so a Binomial$(2n,p)$ can be
"halved".
Exercise: figure out whether a Binomial$(2n+1,p)$ random variable can be "halved".
Similarly, a Negative Binomial$(2n,p)$ random variable can be "halved".
The sum of two independent Poisson$(\lambda)$ random variables is a Poisson$(2\lambda)$; conversely, a Poisson$(\lambda)$ random variable is the sum of two independent Poisson$(\frac{\lambda}{2})$ random
variables. Indeed, as @Xi'an points
out in a comment, a Poisson$(\lambda)$ random variable can be "halved" as many times
as we like: for each positive integer $n$,
it is the sum of $2^n$ independent
Poisson$\left(\frac{\lambda}{2^n}\right)$ random variables. | Halving a discrete random variable?
There are some special cases where this holds true, but for an
arbitrary discrete random variable, your "halving" is not possible.
The sum of two independent Binomial$(n,p)$ random variables is a
a B |
30,873 | Halving a discrete random variable? | The problem, it seems to me, is that you ask for an "independent copy"; otherwise you could just multiply by $\frac{1}{2}$. Instead of writing "copy" (a copy is always dependent), you should maybe write "two independent, but identically distributed random variables".
To answer your questions,
what comes closest is maybe the term convolution. For given $X$, you are looking for two iid RV with convolution $X$.
if you accept negative probabilities, these are no longer random variables, since there is no probability space anymore. There are cases where you can find such $Y,Y^*$ ($X$ $\lambda$-Poisson-distributed, $Y$,$Y^*$ $\frac{\lambda}{2}$-Poisson-distributed), and cases where it is not possible ($X$ Bernoulli, for example).
I haven't seen any, and I can't imagine how to formalize such a best fit. Usually, approximations to random variables are measured by a norm on the space of random variables. I can't think of approximations of random variables by or to non-random variables.
I hope I could help. | Halving a discrete random variable? | The problem seems to me that you ask for an "independent copy", otherwise you could just multiply with $\frac{1}{2}$? Instead of writing copy (a copy is always dependent), you should maybe write "two | Halving a discrete random variable?
The problem seems to me that you ask for an "independent copy", otherwise you could just multiply with $\frac{1}{2}$? Instead of writing copy (a copy is always dependent), you should maybe write "two independent, but identically distributed random variables".
To answer your questions,
what comes closest is maybe the term convolution. For given $X$, you are looking for two iid RV with convolution $X$.
if you accept negative probabilities, these are no longer random variables, since there is no probability space anymore. There are cases where you can find such $Y,Y^*$ ($X$ $\lambda$-Poisson-distributed, $Y$,$Y^*$ $\frac{\lambda}{2}$-Poisson-distributed), and cases where it is not possible ($X$ Bernoulli, as example).
i haven't seen any, and i can't imagine how to formalize such a best fit. Usually, approximations to random variables are measured by a norm on the space of random variables. I can't think of approximations of random variables by or to non - random variables.
I hope i could help. | Halving a discrete random variable?
The problem seems to me that you ask for an "independent copy", otherwise you could just multiply with $\frac{1}{2}$? Instead of writing copy (a copy is always dependent), you should maybe write "two |
30,874 | Observational vs quasi-experimental design? | First, as far as you have described the research design, the study is not a quasi-experiment.
I prefer the term natural experiment to quasi-experiment, because I think it more clearly communicates the fact that treatment needs to have been randomly assigned (or as-if randomly assigned). I use the term natural experiments below, but I consider the two equivalent in meaning.
You are correct that experiments are confined to those situations where a researcher actually manipulates treatment assignment.
Observational studies comprise anything that was not an experiment. Natural experiments are a subset of observational studies, but in a natural experiment units were assigned to treatment in a random process (or as-if random, or almost random).
You might look for a natural experiment (or quasi-experiment) if you were seeking to identify the causal effect of a treatment on a set of outcomes. Then you would look for a situation where that treatment was assigned randomly (or as-if randomly), by nature or a government program, for example. If you wanted to study the impact of forest fires on bird diversity, you might find a place where the government has decided that it will fight fires when they come within X miles of residential areas. After forest fires, you could compare (i) bird diversity in areas affected by the forest fire just a little further than X miles away from residential areas (treatment group) to (ii) bird diversity in areas just a little less than X miles away from residential areas (control group). Because birds would not choose where to live prior to the fire based on the government's designation of the distance X, we can expect that before the fire, birds on either side of the X-mile cutoff would be identical on average. There, assignment to treatment (being "treated" by the forest fire) is as-if random on either side of the X-mile cutoff. This design is called a regression-discontinuity design [1] or a geographic regression discontinuity design [2].
Also, see more discussion of the difference here: Panel study is a quasi-experimental study? Quasi-experimental is the same as correlational?
https://en.wikipedia.org/wiki/Regression_discontinuity_design
"Geographic boundaries as regression discontinuities." LJ Keele, R Titiunik. Political Analysis, 2014 | Observational vs quasi-experimental design? | First, as far as you have described the research design, the study is not a quasi-experiment.
I prefer the term natural experiment to quasi-experiment, because I think it more clearly communicates th | Observational vs quasi-experimental design?
First, as far as you have described the research design, the study is not a quasi-experiment.
I prefer the term natural experiment to quasi-experiment, because I think it more clearly communicates the fact that treatment needs to have been randomly assigned (or as-if randomly assigned). I use the term natural experiments below, but I consider the two equivalent in meaning.
You are correct that experiments are confined to those situations where a researcher actually manipulates treatment assignment.
Observational studies comprise anything that was not an experiment. Natural experiments are a subset of observational studies, but in a natural experiment units were assigned to treatment in a random process (or as-if random, or almost random).
You might look for a natural experiment (or quasi-experiment) if you were seeking to identify the causal effect of a treatment on a set of outcomes. Then you would look for a situation where assignment to that treatment was assigned randomly (or as-if randomly) by nature or a government program, for example. For example, if you wanted to study the impact of forest fires on bird diversity, you might find a place where the government has defined that it will fight fires when they come with X miles of residential areas. After forest fires, you could compare (i) bird diversity in areas affected by the forest fire just a little further than X miles away from residential areas (treatment group) to (ii) bird diversity in areas just a little less than X miles away from residential areas (control group). Because birds would not choose where to live prior to fire based on the government's designation of the distance X, we can expect that before the fire on either side of the X-mile cutoff, birds would be identical on average. There assignment to treatment (being "treated" by the forest fire) is as-if random on either side of the X-mile cutoff. This design is called a regression-discontinuity design [1] or a geographic regression discontinuity design [2].
Also, see more discussion of the difference here: Panel study is a quasi-experimental study? Quasi-experimental is the same as correlational?
https://en.wikipedia.org/wiki/Regression_discontinuity_design
"Geographic boundaries as regression discontinuities." LJ Keele, R Titiunik. Political Analysis, 2014 | Observational vs quasi-experimental design?
First, as far as you have described the research design, the study is not a quasi-experiment.
I prefer the term natural experiment to quasi-experiment, because I think it more clearly communicates th |
30,875 | Observational vs quasi-experimental design? | I can try to give an example from my own field, econometrics:
Economists are interested in "returns to schooling", i.e., how much more do you earn per additional year of schooling obtained.
An experiment is not an option, as, for good and obvious reasons, you cannot force people to continue or stop their educational career just because of the empirical analysis.
Observational data is generally tricky to interpret if you are interested in the causal effect of another year of schooling, because there are "confounders" that imply that the (generally positive) correlation between schooling and earnings is not (fully) causal. For example, more able, motivated and careful individuals can be thought to choose to obtain more schooling, and such individuals would have at least partly done well in the labor market without additional schooling.
Now, sometimes nature or law is kind enough to hand you a "quasi-experiment". In the above example, researchers have for example exploited changes in compulsory schooling laws. If, effective say 1955, all students in a country are obliged to attend secondary school for, say, 10 rather than 8 years, there will be at least some students who obtain more schooling only because of the new law, and not because they choose so.
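A purely hypothetical sketch of how such a law change is typically exploited as an instrument in R (using the AER package; the data frame wages and its variable names are invented for illustration):
library(AER)
# log earnings regressed on years of schooling, instrumented by exposure to the 1955 reform
iv_fit <- ivreg(log_earnings ~ years_schooling | affected_by_1955_reform,
                data = wages)
summary(iv_fit, diagnostics = TRUE)   # includes weak-instrument and endogeneity diagnostics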
Instrumental variable approaches may then be a credible way to exploit this so-called exogenous variation to say something about the causal effect of schooling. | Observational vs quasi-experimental design? | I can try to give an example from my own field, econometrics:
Economists are interested in "returns to schooling", i.e., how much more do you earn per additional year of schooling obtained.
An experi | Observational vs quasi-experimental design?
I can try to give an example from my own field, econometrics:
Economists are interested in "returns to schooling", i.e., how much more do you earn per additional year of schooling obtained.
An experiment is not an option, as, for good and obvious reasons, you cannot force people to continue or stop their educational career just because of the empirical analysis.
Observational data is generally tricky to interpret if you are interested in the causal effect of another year of schooling, because there are "confounders" that imply that the (generally positive) correlation between schooling and earnings is not (fully) causal. For example, more able, motivated and careful individuals can be thought to choose to obtain more schooling, and such individuals would have at least partly done well in the labor market without additional schooling.
Now, sometimes nature or law is kind enough to hand you a "quasi-experiment". In the above example, researchers have for example exploited changes in compulsory schooling laws. If, effective say 1955, all students in a country are obliged to attend secondary school for, say, 10 rather than 8 years, there will be at least some students who obtain more schooling only because of the new law, and not because they choose so.
Instrumental variable approaches may then be a credible way to exploit this so-called exogenous variation to say something about the causal effect of schooling. | Observational vs quasi-experimental design?
I can try to give an example from my own field, econometrics:
Economists are interested in "returns to schooling", i.e., how much more do you earn per additional year of schooling obtained.
An experi |
30,876 | Observational vs quasi-experimental design? | I would like to answer your question from the Epidemiology point of view.
Basically, there are three kinds of studies in epidemiology: observational, experimental, and theoretical.
For observational studies, as a researcher you do not give any intervention to the groups you study. You just collect data cross-sectionally, retrospectively, or prospectively.
For an experimental design, as a researcher you allocate your intervention to some groups, while other groups do not receive it.
There are randomized experiments (such as clinical trials) and non-randomized experiments.
For randomized experiments, which group a patient belongs to is determined by a randomization procedure.
For a non-randomized experiment, which is also called a quasi-experiment, there is no randomization procedure to allocate patients to the different groups; it might just be done by convenience. | Observational vs quasi-experimental design? | I would like to answer your question from the Epidemiology point of view.
Basically,there are three kind of studies in Epidemiology, observational study, Experimental and theoretical study.
For obse | Observational vs quasi-experimental design?
I would like to answer your question from the Epidemiology point of view.
Basically,there are three kind of studies in Epidemiology, observational study, Experimental and theoretical study.
For observational studies, as a researcher you will not give any interventions to any groups you will study. You just collect data cross-sectionally, retrospectively or prospectively.
For experimental design, as a researcher you will allocate your intervention to some groups and other groups will not receive your intervention.
There are randomized experiments (such as clinical trials) and non-randomized experiment.
For randomized experiments patient belongs to which group is determined by randomization procedures).
For non-randomized experiment, which is also called quasi-experiment, there are no randomization procedures to allocate patients to different groups, it might just be done by convenience. | Observational vs quasi-experimental design?
I would like to answer your question from the Epidemiology point of view.
Basically,there are three kind of studies in Epidemiology, observational study, Experimental and theoretical study.
For obse |
30,877 | Observational vs quasi-experimental design? | A quasi-experimental design is one that uses an "experimental research procedure" but in which not all extraneous variables are controlled. Quasi-experimental designs lack random assignment of participants to groups; only in strong experimental designs is this achieved.
Causal inferences can only be drawn from quasi-experimental designs if (1) cause and effect covary, (2) the cause precedes the effect, and (3) rival hypotheses are implausible (so the relationship between variables must not be due to a confounding extraneous variable).
Now, the third condition is hard to achieve since there is no randomization.
So we can see quasi-experimental designs as a better option than weak experimental designs, but not as good as strong experimental designs.
For your bird-watching scenario, using a quasi-experimental design will not yield a conclusive result, and the relationship between the type of bird and the land-use parameters might be affected by other variables like weather, migration seasons, temperature, humidity, wind orientation, etc. However, this might be good enough for your study if you are not able to apply a strong experimental design.
On the other hand, the observational study draws inferences from a sample to a population where the independent variable is not under the control of the researcher. The observational study is then more about the data-collection process, where you as a researcher must collect what you can and draw inferences from there. The inference in this case (statistically speaking) could be managed by the amount and quality of attributes recorded. Naturalistic observation is conducted in real-world settings and is subject to noise and error. Since the observational study might be conducted on a single farm or a couple selected from the nearby surroundings, it will not be using a randomized sample, which also makes it prone to statistical error for causal relationships. The only way the observational study will be good enough to demonstrate cause and effect is when it is run under laboratory conditions, say your birds are in a controlled environment, where several domes are created that represent each land/farm type and then you basically observe behavior or whatever you are measuring. Laboratory observation is closely similar to a quasi-experimental design since you have control of a variable (the setting).
hope this helps. | Observational vs quasi-experimental design? | The quasi experimental design is the one that uses an "experimental research procedure" but not all extraneous variables are controlled. Quasi experimental designs lack random assignment of participan | Observational vs quasi-experimental design?
The quasi experimental design is the one that uses an "experimental research procedure" but not all extraneous variables are controlled. Quasi experimental designs lack random assignment of participants to groups. Only in strong experimental designs is this achieved.
Causal inferences can only be done from quasi-experimental designs if (1) cause and effect covary, (2) cause must precede effect and (3) rival hypothesis must be implausible (so the relationship between variables must not be due a confounding extraneous variable).
Now, the third condition is hard to achieve since there is no randomization.
So we can see Quasi-experimental designs to be a better option than weak experimental designs and not as good as strong experimental designs.
For your bird-watching scenario, using a quasi-experimental design will not have a conclusive result and the relationship between the type of bird and the land use parameters might be affected by other variables like weather, migration seasons, temperature, humidity, wind orientation, etc. However this might be good enough for your study if you are not able to apply a strong experimental design.
On the other hand, the observational study draws inferences from a sample to a population where the independent variable is not under the control of the researcher. The observational study is then more into the data collection process, where you as a researcher must collect what you can, to draw inferences from there. The inference in this case (statistically speaking) could be managed by the amount and quality of attributes recorded. Naturalistic observation is conducted in real world observations and subject to noise and error. Since the observational study might be conducted in a single farm or a couple selected from the near-by surroundings, this will not be using a randomized sample which might also be prone to statistical error for causal relationships. The only way the observational study will be good enough to demonstrate cause and effect, will be when it is ran under laboratory conditions, say your birds are in a controlled environment, where several domes are created that represents each land/farm type and then you basically observe behavior or whatever you are measuring. The laboratory observation is closely similar to a quasi-experimental design since you are having control of a variable (the setting).
hope this helps. | Observational vs quasi-experimental design?
The quasi experimental design is the one that uses an "experimental research procedure" but not all extraneous variables are controlled. Quasi experimental designs lack random assignment of participan |
30,878 | Observational vs quasi-experimental design? | The point of experiments is to determine causality, which typically requires establishing that: 1) one thing happened before the other, 2) that the putative cause had some explanation mechanism for affecting the outcome, and 3) that there are no competing explanations or alternate causes. Also helps if the relationship is reliable--that the lights go on every time you hit the switch. Experiments are designed to establish these relationships, by controlling conditions to establish chronological sequence and control for possible alternate causes. Effective experimental design also includes a control: A population that is not given the experimental treatment.
In many cases, it's not possible/safe/legal to establish a control group before the experiment; in that case, it's a quasi-experiment. If the assumption that the effects of the treatment were random is true (i.e., people weren't somehow selected for the treatment by age/socio-economic status/race etc.), then it's a control. The assumption is random assignment, but sometimes that assumption has to be relaxed (or controlled for); if not controlled for, it weakens the strength of your causal inferences.
Control groups also rely on the assumption that the two groups prior to the treatment were identical. This measurement is typically called a 'pre-test' measurement. Then after the experimental treatment is applied, a 'post-test' measure is made. The pre-test measure should be the same for both the treatment (experimental) and control groups. And then, if the experimental treatment did anything, the post-test value for the treatment and control groups should be different. In summary, both experiments (natural and otherwise) should have a pre-test, post-test and control group.
Actual observational studies belong to a completely different style of science: inductive rather than deductive, and can generally be identified with qualitative traditions. The essential difference is that numerical data sufficient for a statistical analysis isn't available. In a comparative case study, the sample might be as small as two. While there are a variety of qualitative techniques I'm not remotely qualified to talk about, I'll make a very broad generalization and say: they compare fewer, less well defined things, because part of the role of qualitative research is to define what things are: to create constructs and definitions and theorize (based on observations) what the possible causal relationships between things might be. | Observational vs quasi-experimental design? | The point of experiments is to determine causality, which typically requires establishing that: 1) one thing happened before the other, 2) that the putative cause had some explanation mechanism for a | Observational vs quasi-experimental design?
The point of experiments is to determine causality, which typically requires establishing that: 1) one thing happened before the other, 2) that the putative cause had some explanation mechanism for affecting the outcome, and 3) that there are no competing explanations or alternate causes. Also helps if the relationship is reliable--that the lights go on every time you hit the switch. Experiments are designed to establish these relationships, by controlling conditions to establish chronological sequence and control for possible alternate causes. Effective experimental design also includes a control: A population that is not given the experimental treatment.
In many cases, it's not possible/safe/legal to establish a control group before the experiment. In which case, it's a quasi-experiment. If the assumption that the affects of the treatment were random is true (ie, people weren't somehow selected for the treatment, by age/socio-economic status/race etc), then it's a control. The assumption is random assignment, but sometimes that assumption has to be relaxed (or controlled for); if not controlled for, it weakens the strength of your causal inferences.
Control groups also rely on the assumption that the two groups prior to the treatment were identical. This measurement is typically called the a 'pre-test' measurement. Then after the experimental treatment is applied, a 'post-test measure is made. The pre-test measure should be the same for both the treatment (experimental) and control groups. And then, if the experimental treatment did anything, the post-test value for the treatment and control groups should be different. In summary, both experiments (natural and otherwise) should have a pre-test, post-test and control group.
Actual observational studies belong to a completely different style of science: inductive rather than deductive, and can generally be identified with qualitative traditions. The essential difference is that numerical data sufficient for a statistical analysis isn't available. In a comparative case study, the sample might be as small as two. While their are a variety of qualitative techniques I'm not remotely qualified to talk about, I'll make a very broad generalization and say: they compare fewer, less well defined things, because part of the role of qualitative research is to define what things are: to create constructs an definitions and theorize (based on observations) what the possible causal relationships between things might be. | Observational vs quasi-experimental design?
The point of experiments is to determine causality, which typically requires establishing that: 1) one thing happened before the other, 2) that the putative cause had some explanation mechanism for a |
30,879 | Does an urn's probability distribution change as you draw from it without replacement on average? | "Direct calculation": Let there be $n$ balls of $m$ colours in the urn. Let us focus on the probability of drawing one particular colour, say white, on the second draw. Let the number of white balls be $n_w$. Let $X_i$ be the colour of the ball obtained at the $i$-th draw.
\begin{eqnarray}
P(X_2=W)&=&P(X_2=W|X_1=W)P(X_1=W)+P(X_2=W|X_1=\overline{W})P(X_1=\overline{W})\\
&=&\frac{n_w-1}{n-1}\frac{n_w}{n}+\frac{n_w}{n-1}\frac{n-n_w}{n}\\
&=&\frac{n_w(n-n_w+n_w-1)}{n(n-1)}\\
&=&\frac{n_w}{n}\\
&=&P(X_1=W)
\end{eqnarray}
Of course this same argument applies to any colour on the second draw. We can apply the same kind of argument recursively when considering later draws.
[One could of course perform an even more direct calculation. Consider the first $k$ draws as consisting of $i$ white balls and $k-i$ non-white balls (with probability given by the hypergeometric distribution), and perform the corresponding calculation to the simple one above but for the draw at step $k+1$; one gets a similar simplification and cancellation, but it's not especially enlightening to carry out.]
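A small simulation illustrating the result (a sketch added for illustration, not part of the original answer; the urn composition of 5 white, 3 red and 2 blue balls is arbitrary):
set.seed(1)
urn <- rep(c("white", "red", "blue"), times = c(5, 3, 2))
draw_k <- function(k) sample(urn)[k]   # colour of the k-th draw when the earlier draws are not observed
reps <- 1e5
first <- replicate(reps, draw_k(1))
fifth <- replicate(reps, draw_k(5))
# both rows should be close to 0.2, 0.3, 0.5 for blue, red, white
rbind(first = table(first) / reps, fifth = table(fifth) / reps)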
A shorter argument: consider labelling the balls randomly with the numbers $1,2,...,n$, and then drawing them out in labelled order. The question now becomes "Is the probability that a given label, $k$, is placed on a white ball the same as the probability the label $1$ gets placed on a white ball?"
Now we see the answer must be "yes" by symmetry of the labels. Similarly, by symmetry of the ball-colours, it doesn't matter that we said "white", so the argument that label $k$ and label $1$ have the same probability applies to any colour. Hence the distribution at the $k$-th draw is the same as for the first draw, as long as we have no additional information from the earlier draws (i.e. as long as the earlier drawn balls are not seen). | Does an urn's probability distribution change as you draw from it without replacement on average? | "Direct calculation": Let there be $n$ balls of $m$ colours in the urn. Let us focus on the probability of drawing one particular colour, say white, on the second draw. Let the number of white balls b | Does an urn's probability distribution change as you draw from it without replacement on average?
"Direct calculation": Let there be $n$ balls of $m$ colours in the urn. Let us focus on the probability of drawing one particular colour, say white, on the second draw. Let the number of white balls be $n_w$. Let $X_i$ be the colour of the ball obtained at the $i$-th draw.
\begin{eqnarray}
P(X_2=W)&=&P(X_2=W|X_1=W)P(X_1=W)+P(X_2=W|X_1=\overline{W})P(X_1=\overline{W})\\
&=&\frac{n_w-1}{n-1}\frac{n_w}{n}+\frac{n_w}{n-1}\frac{n-n_w}{n}\\
&=&\frac{n_w(n-n_w+n_w-1)}{n(n-1)}\\
&=&\frac{n_w}{n}\\
&=&P(X_1=W)
\end{eqnarray}
Of course this same argument applies to any colour on the second draw. We can apply the same kind of argument recursively when considering later draws.
[One could of course perform an even more direct calculation. Consider the first $k$ draws as consisting of $i$ white balls and $k-i$ non-white balls (with probability given by the hypergeometric distribution), and perform the corresponding calculation to the simple one above but for the draw at step $k+1$; one gets a similar simplification and cancellation, but it's not especially enlightening to carry out.]
A shorter argument: consider labelling the balls randomly with the numbers $1,2,...,n$, and then drawing them out in labelled order. The question now becomes "Is the probability that a given label, $k$, is placed on a white ball the same as the probability the label $1$ gets placed on a white ball?"
Now we see the answer must be "yes" by symmetry of the labels. Similarly, by symmetry of the ball-colours, it doesn't matter that we said "white", so the argument that label $k$ and label $1$ have the same probability applies to any colour. Hence the distribution at the $k$-th draw is the same as for the first draw, as long as we have no additional information from the earlier draws (i.e. as long as the earlier drawn balls are not seen). | Does an urn's probability distribution change as you draw from it without replacement on average?
"Direct calculation": Let there be $n$ balls of $m$ colours in the urn. Let us focus on the probability of drawing one particular colour, say white, on the second draw. Let the number of white balls b |
30,880 | Does an urn's probability distribution change as you draw from it without replacement on average? | The only reason it is not perfectly obvious that the distribution remains unchanged (provided at least one ball remains) is that there is too much information. Let's strip out the distracting material.
Ignore, for a moment, the color of each ball. Focus on one ball. Assume $k$ balls are about to be randomly removed (and not observed), and then a $k+1$st ball will be drawn and observed. It makes no difference what order the selection occurs in, so you might as well observe the very first ball drawn (and then remove another $k$ balls if you insist). The distribution obviously has not changed, because it will not be affected by removing the other $k$ balls.
This argument--although perfectly valid--could make some people feel uneasy. The following analysis might be accepted as more rigorous, because it does not ask us to ignore the selection order.
Keep focusing on your ball. It will have some probability $p_k$ of being selected as the $k+1$st ball. Although $p_k$ is easy to compute, we don't need to know its value: all that matters is that it must be the same value for each ball (because all balls are equivalent) and that it be nonzero. But if it were zero, no ball would have any probability of being selected: so as long as at least one ball remains, $p_{k}\ne 0$.
Pay attention to the colors again. By definition, the chance that a particular color $C$ will be chosen (after $k$ balls are randomly removed) is the sum of the chances of all the original $C$-colored balls divided by the sum of chances of all original balls. When there originally are $k_C$ balls of color $C$ and $n$ balls total, that value is
$${\Pr}_k(C) = \frac{k_C p_k}{n p_k} = \frac{k_C}{n}.$$
When $k\lt n$ it does not depend on $k$, QED. | Does an urn's probability distribution change as you draw from it without replacement on average? | The only reason it is not perfectly obvious that the distribution remains unchanged (provided at least one ball remains) is that there is too much information. Let's strip out the distracting materia | Does an urn's probability distribution change as you draw from it without replacement on average?
The only reason it is not perfectly obvious that the distribution remains unchanged (provided at least one ball remains) is that there is too much information. Let's strip out the distracting material.
Ignore, for a moment, the color of each ball. Focus on one ball. Assume $k$ balls are about to be randomly removed (and not observed), and then a $k+1$st ball will be drawn and observed. It makes no difference what order the selection occurs in, so you might as well observe the very first ball drawn (and then remove another $k$ balls if you insist). The distribution obviously has not changed, because it will not be affected by removing the other $k$ balls.
This argument--although perfectly valid--could make some people feel uneasy. The following analysis might be accepted as more rigorous, because it does not ask us to ignore the selection order.
Keep focusing on your ball. It will have some probability $p_k$ of being selected as the $k+1$st ball. Although $p_k$ is easy to compute, we don't need to know its value: all that matters is that it must be the same value for each ball (because all balls are equivalent) and that it be nonzero. But if it were zero, no ball would have any probability of being selected: so as long as at least one ball remains, $p_{k}\ne 0$.
Pay attention to the colors again. By definition, the chance that a particular color $C$ will be chosen (after $k$ balls are randomly removed) is the sum of the chances of all the original $C$-colored balls divided by the sum of chances of all original balls. When there originally are $k_C$ balls of color $C$ and $n$ balls total, that value is
$${\Pr}_k(C) = \frac{k_C p_k}{n p_k} = \frac{k_C}{n}.$$
When $k\lt n$ it does not depend on $k$, QED. | Does an urn's probability distribution change as you draw from it without replacement on average?
The only reason it is not perfectly obvious that the distribution remains unchanged (provided at least one ball remains) is that there is too much information. Let's strip out the distracting materia |
30,881 | Does an urn's probability distribution change as you draw from it without replacement on average? | Let the distribution of drawing a single ball — after having already drawn $k$ balls without replacement — have categorical distribution $E(D_k)$ given the distribution over such categorical distributions $D_k$.
I guess you are asking whether $E(D_k)$ is constant.
I think it is. Suppose that you eventually draw all of the balls. All permutations of the balls are equally likely. The probability of drawing initially is $E(D_0)$. You could rearrange your choices to an equally likely permutation whereby your first chosen ball was chosen last, and your second chosen was chosen first. That ball has expectation $E(D_1)$, which must be equal to $E(D_0)$ due to symmetry. By induction the $E(D_i)$ are all equal. | Does an urn's probability distribution change as you draw from it without replacement on average? | Let the distribution of drawing a single ball — after having already drawn $k$ balls without replacement — have categorical distribution $E(D_k)$ given the distribution over such categorical distribut | Does an urn's probability distribution change as you draw from it without replacement on average?
Let the distribution of drawing a single ball — after having already drawn $k$ balls without replacement — have categorical distribution $E(D_k)$ given the distribution over such categorical distributions $D_k$.
I guess you are asking whether $E(D_k)$ is constant.
I think it is. Suppose that you eventually draw all of the balls. All permutations of the balls are equally likely. The probability of drawing initially is $E(D_0)$. You could rearrange your choices to an equally likely permutation whereby your first chosen ball was chosen last, and your second chosen was chosen first. That ball has expectation $E(D_1)$, which must be equal to $E(D_0)$ due to symmetry. By induction the $E(D_i)$ are all equal. | Does an urn's probability distribution change as you draw from it without replacement on average?
Let the distribution of drawing a single ball — after having already drawn $k$ balls without replacement — have categorical distribution $E(D_k)$ given the distribution over such categorical distribut |
30,882 | Does an urn's probability distribution change as you draw from it without replacement on average? | The "expected distribution" does not change. One could use a martingale argument! I will add one to the answer later (I am travelling now).
The distribution of the later draws, conditional on the earlier draws, changes only when you actually observe those draws. If you draw the ball from the urn with a tightly closed hand and then throw it away without observing its color (I have used such theater effectively as a class demonstration), the distribution does not change. This fact has an explanation: probability is about information; probability is an information concept.
So probabilities change only when you get new information (conditional probabilities, that is). Drawing the ball and throwing it away without observing it does not give you any new information, so there is nothing new to condition upon. When you condition on the actual information set, which has not changed, the conditional distribution cannot change.
EDIT
I will not give many more details in this answer now, only add one reference: Hosam M. Mahmoud, "Pólya Urn Models" (Chapman & Hall), which treats urn models like the one in this question, and also much more general urn schemes, using martingale methods to obtain limit results. But the martingale methods are not needed for the question in this post. | Does an urn's probability distribution change as you draw from it without replacement on average? | The "expected distribution" do not change. One could use a martingale argument! I Will add such to the answer later (I am travelling now).
The distribution, conditional on the earlier draws (for the | Does an urn's probability distribution change as you draw from it without replacement on average?
The "expected distribution" do not change. One could use a martingale argument! I Will add such to the answer later (I am travelling now).
The distribution, conditional on the earlier draws (for the later draws) do change only when you actually observes the draws. If you draw the ball from the urn with a tightly closed hand, and then throws it away without observing its color (I have used such theater effectively as class demonstration), the distribution do not change. This fact has an explication: Probability is about information, Probability is an information concept.
So probabilities do change only when you get new information (conditional probabilities, that is). Drawing the ball and throwing it away without observing it does not give you any new information, so nothing new to condition upon. So when you condition on the actual information set, that has not changed, so the conditional distribution cannot change.
EDIT
I will not now give much more details to this answer, only add one reference: Hosam M. Mahmoud:"Pólya Urn Models" (Chapman & Hall), which treats urn models like the one in this question, and also much more generalized urn schemes, also by using martingale methods for obtaining limit results. But the martingale methods are not needed for the question in this post. | Does an urn's probability distribution change as you draw from it without replacement on average?
The "expected distribution" do not change. One could use a martingale argument! I Will add such to the answer later (I am travelling now).
The distribution, conditional on the earlier draws (for the |
30,883 | How to fit piecewise constant (or step-function) model and compare to logistic model in R | I think one of the main difficulties is that piecewise constant regressions are not usually called "piecewise constant regressions". They are usually called regression trees, which is a nice visual name, but not particularly googleable if you don't already know what people call them! They can be fit in R with the builtin rpart package (I believe rpart stands for "recursive partitioning", which is our third name for the same concept).
Here's rpart in action on your data:
library(rpart)
df <- data.frame(x=x, y=y)
tree <- rpart(y ~ x, data=df)
I wrote a little plot_tree function that shows the predictions
plot_tree <- function(tree, x, y) {
s <- seq(110, 155, by=.5)
plot(x, y)
lines(s, predict(tree, data.frame(x=s)))
}
which, when applied to the default tree on your data, looks like this
plot_tree(tree, x, y)
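If you also want a rough in-sample comparison with your logistic fit (a sketch only; logistic_fit below stands for whatever logistic model object you have already fitted, which is not defined here), you can inspect where the fitted step function jumps and compare residual sums of squares:
s_grid <- seq(110, 155, by = .5)
sort(unique(predict(tree, data.frame(x = s_grid))))   # the constant level on each piece
rss_tree     <- sum((y - predict(tree, df))^2)
rss_logistic <- sum((y - fitted(logistic_fit))^2)     # logistic_fit is assumed, not shown here
c(tree = rss_tree, logistic = rss_logistic)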
You can control the granularity of fit to your data by using rpart.control.
tree <- rpart(y ~ x, data=df, control=rpart.control(minsplit=5, cp=.0001))
plot_tree(tree, x, y) | How to fit piecewise constant (or step-function) model and compare to logistic model in R | I think one of the main difficulties is that piecewise constant regressions are not usually called "piecewise constant regressions". They are usually called regression trees, which is a nice visual n | How to fit piecewise constant (or step-function) model and compare to logistic model in R
I think one of the main difficulties is that piecewise constant regressions are not usually called "piecewise constant regressions". They are usually called regression trees, which is a nice visual name, but not particularly googleable if you don't already know what people call them! They can be fit in R with the builtin rpart package (I believe rpart stands for "recursive partitioning", which is our third name for the same concept).
Here's rpart in action on your data:
df <- data.frame(x=x, y=y)
tree <- rpart(y ~ x, data=df)
I wrote a little plot_tree function that shows the predictions
plot_tree <- function(tree, x, y) {
s <- seq(110, 155, by=.5)
plot(x, y)
lines(s, predict(tree, data.frame(x=s)))
}
which, when applied to the default tree on your data, looks like this
plot_tree(tree, x, y)
You can control the granularity of fit to your data by using rpart.control.
tree <- rpart(y ~ x, data=df, control=rpart.control(minsplit=5, cp=.0001))
plot_tree(tree, x, y) | How to fit piecewise constant (or step-function) model and compare to logistic model in R
I think one of the main difficulties is that piecewise constant regressions are not usually called "piecewise constant regressions". They are usually called regression trees, which is a nice visual n |
30,884 | Does binomial distribution have the smallest possible variance among all "reasonable" distributions that can model binary elections? | No.
Suppose the voters consist of $n=2k$ married pairs. The husbands get together and decide to vote against their wives, who themselves choose randomly. The outcome is always $k$ votes for each of the candidates, with zero variance.
You might cry foul because the husbands are not voting randomly. Well, they are--they just happen to be tied closely with the random votes of their wives. If that bothers you, change things a bit by having each husband flip ten fair coins. If all ten are heads, he will vote with his wife; otherwise he votes against her. You can check that the election outcome still has small (albeit nonzero) variance, even though every vote is unpredictable.
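A simulation sketch of the coin-flipping version (added for illustration, not part of the original answer; 500 pairs and 10 coins are arbitrary choices):
set.seed(1)
k <- 500                                   # number of married pairs
one_election <- function() {
  wives <- rbinom(k, 1, 0.5)               # wives vote at random
  copy  <- rbinom(k, 1, 0.5^10)            # a husband copies his wife only if ten fair coins all land heads
  husbands <- ifelse(copy == 1, wives, 1 - wives)
  sum(wives) + sum(husbands)               # total votes for one candidate
}
totals <- replicate(1e4, one_election())
var(totals)                                # tiny compared with the binomial variance 2 * k * 0.25 = 250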
The crux of the matter lies in the negative covariance between two voting blocs, males and females. | Does binomial distribution have the smallest possible variance among all "reasonable" distributions | No.
Suppose the voters consist of $n=2k$ married pairs. The husbands get together and decide to vote against their wives, who themselves choose randomly. The outcome is always $k$ votes for each of | Does binomial distribution have the smallest possible variance among all "reasonable" distributions that can model binary elections?
No.
Suppose the voters consist of $n=2k$ married pairs. The husbands get together and decide to vote against their wives, who themselves choose randomly. The outcome is always $k$ votes for each of the candidates, with zero variance.
You might cry foul because the husbands are not voting randomly. Well, they are--they just happen to be tied closely with the random votes of their wives. If that bothers you, change things a bit by having each husband flip ten fair coins. If all ten are heads, he will vote with his wife; otherwise he votes against her. You can check that the election outcome still has small (albeit nonzero) variance, even though every vote is unpredictable.
The crux of the matter lies in the negative covariance between two voting blocs, males and females. | Does binomial distribution have the smallest possible variance among all "reasonable" distributions
No.
Suppose the voters consist of $n=2k$ married pairs. The husbands get together and decide to vote against their wives, who themselves choose randomly. The outcome is always $k$ votes for each of |
30,885 | Does binomial distribution have the smallest possible variance among all "reasonable" distributions that can model binary elections? | Double-no (it maximises the variance)
The answer from whuber is excellent. To supplement that answer, it is also worth examining what happens if you assume that the votes are independent. If we take the votes as mutually independent with probabilities $p_1,...,p_n$ then the mean and variance are:
$$\mathbb{E}(S_n) = \sum_{i=1}^n p_i
\quad \quad \quad \quad \quad
\mathbb{V}(S_n) = \sum_{i=1}^n p_i - \sum_{i=1}^n p_i^2.$$
If we condition on a fixed expected value $\mu = \mathbb{E}(S_n)$ then it can be shown that the maximum variance is achieved when $p_1 = \cdots = p_n = \mu/n$. (To demonstrate this you can set up the Lagrangian optimisation to attain this solution.) So not only does the binomial distribution not minimise the variance, it maximises the variance out of all possible cases where we have independent votes. | Does binomial distribution have the smallest possible variance among all "reasonable" distributions | Double-no (it maximises the variance)
The answer from whuber is excellent. To supplement that answer, it is also worth examining what happens if you assume that the votes are independent. If we take | Does binomial distribution have the smallest possible variance among all "reasonable" distributions that can model binary elections?
Double-no (it maximises the variance)
The answer from whuber is excellent. To supplement that answer, it is also worth examining what happens if you assume that the votes are independent. If we take the votes as mutually independent with probabilities $p_1,...,p_n$ then the mean and variance are:
$$\mathbb{E}(S_n) = \sum_{i=1}^n p_i
\quad \quad \quad \quad \quad
\mathbb{V}(S_n) = \sum_{i=1}^n p_i - \sum_{i=1}^n p_i^2.$$
If we condition on a fixed expected value $\mu = \mathbb{E}(S_n)$ then it can be shown that the maximum variance is achieved when $p_1 = \cdots = p_n = \mu$. (To demonstrate this you can set up the Lagrangian optimisation to attain this solution.) So not only does the binomial distribution not minimise the variance, it maximises the variance out of all possible cases where we have independent votes. | Does binomial distribution have the smallest possible variance among all "reasonable" distributions
Double-no (it maximises the variance)
The answer from whuber is excellent. To supplement that answer, it is also worth examining what happens if you assume that the votes are independent. If we take |
30,886 | Using post-stratification weights in R survey package | If people say they have post-stratified weights, it does not necessarily mean they implemented post-stratification, proper (as in, rescaled the weights in each demographic cell to the known population total). About 80% of usage that I hear of "post-stratified weights" actually refers to calibrated weights (i.e., rather than trying to adjust each and every cell in a five-way table, the weights are only adjusted to match each of the five variables of the table individually). I produced what somebody referred to as a methodological rant on the distinction. The distinction, however, plays a role in standard error calculations, as Anthony noted in another answer. With properly post-stratified weights, you can apply the regular variance estimation formulae, more or less treating your post-strata as sampling strata (minor technicalities aside). With weights that are only calibrated on each table margin, computations are somewhat more involved. Both procedures are internalized in survey package, anyway, though. You just need to feed your post-stratification/calibration variables to the appropriate design object/formula.
library(survey)
data(api)
# cross-classified post-stratification variable in population
apipop$stype.sch.wide <- 10*as.integer(apipop$stype) +
as.integer(apipop$sch.wide)
# cross-classified post-stratification variable in sample
apiclus1$stype.sch.wide <-
10*as.integer(apiclus1$stype) + as.integer(apiclus1$sch.wide)
# population totals
(pop.totals <- xtabs(~stype.sch.wide, data=apipop))
# reference design
dclus1 <- svydesign(id=~dnum,weights=~pw,data=apiclus1,fpc=~fpc)
# post-stratification of the original design
dclus1p <- postStratify(dclus1,~stype.sch.wide, pop.totals)
# design with post-stratified weights, but no evidence of post-stratification
dclus1pfake <- svydesign(id=~dnum,weights=~weights(dclus1p),data=apiclus1,fpc=~fpc)
# taking off the design with known weights, add post-stratification interaction
dclus1pp <- postStratify(dclus1pfake,~stype.sch.wide, pop.totals)
# estimates and standard errors: starting point
svymean(~api00,dclus1)
# post-stratification reduces standard errors a bit
svymean(~api00,dclus1p)
# but here we are not aware of the survey being post-stratified
svymean(~api00,dclus1pfake)
# if we just add post-stratification variables to the design object
# that only had post-stratified weights, the result is the same
# as for post-stratified object based on the original weights
svymean(~api00,dclus1pp) | Using post-stratification weights in R survey package | If people say they have post-stratified weights, it does not necessarily mean they implemented post-stratification, proper (as in, rescaled the weights in each demographic cell to the known population | Using post-stratification weights in R survey package
If people say they have post-stratified weights, it does not necessarily mean they implemented post-stratification, proper (as in, rescaled the weights in each demographic cell to the known population total). About 80% of usage that I hear of "post-stratified weights" actually refers to calibrated weights (i.e., rather than trying to adjust each and every cell in a five-way table, the weights are only adjusted to match each of the five variables of the table individually). I produced what somebody referred to as a methodological rant on the distinction. The distinction, however, plays a role in standard error calculations, as Anthony noted in another answer. With properly post-stratified weights, you can apply the regular variance estimation formulae, more or less treating your post-strata as sampling strata (minor technicalities aside). With weights that are only calibrated on each table margin, computations are somewhat more involved. Both procedures are internalized in survey package, anyway, though. You just need to feed your post-stratification/calibration variables to the appropriate design object/formula.
library(survey)
data(api)
# cross-classified post-stratification variable in population
apipop$stype.sch.wide <- 10*as.integer(apipop$stype) +
as.integer(apipop$sch.wide)
# cross-classified post-stratification variable in sample
apiclus1$stype.sch.wide <-
10*as.integer(apiclus1$stype) + as.integer(apiclus1$sch.wide)
# population totals
(pop.totals <- xtabs(~stype.sch.wide, data=apipop))
# reference design
dclus1 <- svydesign(id=~dnum,weights=~pw,data=apiclus1,fpc=~fpc)
# post-stratification of the original design
dclus1p <- postStratify(dclus1,~stype.sch.wide, pop.totals)
# design with post-stratified weights, but no evidence of post-stratification
dclus1pfake <- svydesign(id=~dnum,weights=~weights(dclus1p),data=apiclus1,fpc=~fpc)
# taking off the design with known weights, add post-stratification interaction
dclus1pp <- postStratify(dclus1pfake,~stype.sch.wide, pop.totals)
# estimates and standard errors: starting point
svymean(~api00,dclus1)
# post-stratification reduces standard errors a bit
svymean(~api00,dclus1p)
# but here we are not aware of the survey being post-stratified
svymean(~api00,dclus1pfake)
# if we just add post-stratification variables to the design object
# that only had post-stratified weights, the result is the same
# as for post-stratified object based on the original weights
svymean(~api00,dclus1pp) | Using post-stratification weights in R survey package
If people say they have post-stratified weights, it does not necessarily mean they implemented post-stratification, proper (as in, rescaled the weights in each demographic cell to the known population |
30,887 | Using post-stratification weights in R survey package | As @StasK says, the correct standard errors for raked/calibrated weights depend on the original weights and the auxiliary variables, and you don't get the right standard errors just by treating them as sampling weights. Even post-stratification can be problematic if clusters are split between post-strata.
You don't actually need the original weights if you have the auxiliary variables. You can recalibrate the weights in the survey package. The weights won't change -- they were already calibrated -- but the standard error estimates resulting from them will change. There's a nice example of this in the PEAS exemplars created by Gillian Raab.
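For concreteness, a sketch using the api example data that ships with the survey package (this mirrors the package's standard usage rather than anything from the original question; swap in your own design object and auxiliary population margins):
library(survey)
data(api)
dclus1 <- svydesign(id = ~dnum, weights = ~pw, data = apiclus1, fpc = ~fpc)
# known population margins for the auxiliary variables
pop.stype    <- xtabs(~stype,    data = apipop)
pop.sch.wide <- xtabs(~sch.wide, data = apipop)
# rake to those margins; weights already raked to these margins would not change,
# but the design object now carries the information needed for correct standard errors
dclus1r <- rake(dclus1, sample.margins = list(~stype, ~sch.wide),
                population.margins = list(pop.stype, pop.sch.wide))
svymean(~api00, dclus1r)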
It's pretty common in official statistics, though, to treat calibrated weights as if they were sampling weights. That's what NHANES and BRFSS public-use datasets do, and pretty much any other large survey with public use data. Until recently, users wouldn't have had software for doing it right. So, it's not 'best practice', but it probably qualifies as 'good enough practice' | Using post-stratification weights in R survey package | As @StasK says, the correct standard errors for raked/calibrated weights depend on the original weights and the auxiliary variables, and you don't get the right standard errors just by treating them a | Using post-stratification weights in R survey package
As @StasK says, the correct standard errors for raked/calibrated weights depend on the original weights and the auxiliary variables, and you don't get the right standard errors just by treating them as sampling weights. Even post-stratification can be problematic if clusters are split between post-strata.
You don't actually need the original weights if you have the auxiliary variables. You can recalibrate the weights in the survey package. The weights won't change -- they were already calibrated -- but the standard error estimates resulting from them will change. There's a nice example of this in the PEAS exemplars created by Gillian Raab
It's pretty common in official statistics, though, to treat calibrated weights as if they were sampling weights. That's what NHANES and BRFSS public-use datasets do, and pretty much any other large survey with public use data. Until recently, users wouldn't have had software for doing it right. So, it's not 'best practice', but it probably qualifies as 'good enough practice' | Using post-stratification weights in R survey package
As @StasK says, the correct standard errors for raked/calibrated weights depend on the original weights and the auxiliary variables, and you don't get the right standard errors just by treating them a |
30,888 | Using post-stratification weights in R survey package | bad news :) your standard errors, confidence intervals, and tests of significance will be incorrect if you do not account for the relationship between the original and post-stratified weights.
i believe you can back-calculate the original sampling weights if you have the sampling clusters (although you'd have to invest a lot of time reversing the method in the postStratify command).
rather than spending more money, ask whoever creates this data to provide both sets of weights. this is information that the original survey administrator has, and can send to you for the price of an e-mail. | Using post-stratification weights in R survey package | bad news :) your standard errors, confidence intervals, and tests of significance will be incorrect if you do not account for the relationship between the original and post-stratified weights.
i beli | Using post-stratification weights in R survey package
bad news :) your standard errors, confidence intervals, and tests of significance will be incorrect if you do not account for the relationship between the original and post-stratified weights.
i believe you can back-calculate the original sampling weights if you have the sampling clusters (although you'd have to invest a lot of time reversing the method in the postStratify command).
rather than spending more money, ask whoever creates this data to provide both sets of weights. this is information that the original survey administrator has, and can send to you for the price of an e-mail. | Using post-stratification weights in R survey package
bad news :) your standard errors, confidence intervals, and tests of significance will be incorrect if you do not account for the relationship between the original and post-stratified weights.
i beli |
30,889 | Difference between anova and Anova function | anova is a function in base R. Anova is a function in the car package.
The former calculates type I tests, that is, each variable is added in sequential order. The latter calculates type II or III tests. Type II tests test each variable after all the others. For details, see ?Anova. | Difference between anova and Anova function | anova is a function in base R. Anova is a function in the car package.
The former calculates type I tests, that is, each variable is added in sequential order. The latter calculates type II or III t | Difference between anova and Anova function
anova is a function in base R. Anova is a function in the car package.
The former calculates type I tests, that is, each variable is added in sequential order. The latter calculates type II or III tests. Type II tests test each variable after all the others. For details, see ?Anova. | Difference between anova and Anova function
anova is a function in base R. Anova is a function in the car package.
The former calculates type I tests, that is, each variable is added in sequential order. The latter calculates type II or III t |
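A small, hypothetical illustration of the difference (not part of the original answer); with correlated predictors the two tables generally disagree:
library(car)                       # provides Anova()
fit <- lm(mpg ~ wt + hp, data = mtcars)
anova(fit)                         # Type I (sequential): wt alone, then hp adjusted for wt
Anova(fit, type = 2)               # Type II: each term adjusted for the other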
30,890 | Correlated cases and Cross Validation | We have a paper in press that discusses this problem. AFAIK, there is no R package with sophisticated options for block cross-validation, but the paper has some code attached in the appendix that may be useful.
Roberts, D. R. et al. (2017) Cross-validation strategies for data with temporal, spatial, hierarchical, or phylogenetic structure. Ecography, in press.
http://onlinelibrary.wiley.com/doi/10.1111/ecog.02881/abstract | Correlated cases and Cross Validation | We have a paper in press that discusses this problem. AFAIK, there is no R package with sophisticated options for block cross-validation, but the paper has some code attached in the appendix that may | Correlated cases and Cross Validation
We have a paper in press that discusses this problem. AFAIK, there is no R package with sophisticated options for block cross-validation, but the paper has some code attached in the appendix that may be useful.
Roberts, D. R. et al. (2017) Cross-validation strategies for data with temporal, spatial, hierarchical, or phylogenetic structure. Ecography, in press.
http://onlinelibrary.wiley.com/doi/10.1111/ecog.02881/abstract | Correlated cases and Cross Validation
We have a paper in press that discusses this problem. AFAIK, there is no R package with sophisticated options for block cross-validation, but the paper has some code attached in the appendix that may |
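Until such a package exists, block cross-validation by group can be sketched in a few lines of base R (everything below, including the fake grouped data, is illustrative only and not from the paper's appendix):
set.seed(1)
dat <- data.frame(group = rep(1:10, each = 20), x = rnorm(200))
dat$y <- 2 * dat$x + rep(rnorm(10), each = 20) + rnorm(200)
groups <- unique(dat$group)
fold.of.group <- sample(rep(1:5, length.out = length(groups)))  # whole groups share a fold
cv.mse <- sapply(1:5, function(k) {
  test.groups <- groups[fold.of.group == k]
  train <- dat[!dat$group %in% test.groups, ]
  test  <- dat[dat$group %in% test.groups, ]
  fit <- lm(y ~ x, data = train)
  mean((test$y - predict(fit, test))^2)
})
mean(cv.mse)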
30,891 | Correlated cases and Cross Validation | The development version of cvTools at https://github.com/aalfons/cvTools has a grouping argument which may be what you are looking for. | Correlated cases and Cross Validation | The development version of cvTools at https://github.com/aalfons/cvTools has a grouping argument which may be what you are looking for. | Correlated cases and Cross Validation
The development version of cvTools at https://github.com/aalfons/cvTools has a grouping argument which may be what you are looking for. | Correlated cases and Cross Validation
The development version of cvTools at https://github.com/aalfons/cvTools has a grouping argument which may be what you are looking for. |
30,892 | Count the number of each unique row in a data frame? [closed] | Use the count function from the plyr package.
library(plyr)
df = data.frame(x1=c(0,1,1,1,2,3,3,3),
x2=c(0,1,1,3,2,3,3,2),
x3=c(0,1,1,1,2,3,3,2))
count(df, vars = c("x1", "x2", "x3"))
Output:
> count(df, vars = c("x1", "x2", "x3"))
x1 x2 x3 freq
1 0 0 0 1
2 1 1 1 2
3 1 3 1 1
4 2 2 2 1
5 3 2 2 1
6 3 3 3 2 | Count the number of each unique row in a data frame? [closed] | Use the count function from the plyr package.
library(plyr)
df = data.frame(x1=c(0,1,1,1,2,3,3,3),
x2=c(0,1,1,3,2,3,3,2),
x3=c(0,1,1,1,2,3,3,2))
count(df, vars = c("x1", | Count the number of each unique row in a data frame? [closed]
Use the count function from the plyr package.
library(plyr)
df = data.frame(x1=c(0,1,1,1,2,3,3,3),
x2=c(0,1,1,3,2,3,3,2),
x3=c(0,1,1,1,2,3,3,2))
count(df, vars = c("x1", "x2", "x3"))
Output:
> count(df, vars = c("x1", "x2", "x3"))
x1 x2 x3 freq
1 0 0 0 1
2 1 1 1 2
3 1 3 1 1
4 2 2 2 1
5 3 2 2 1
6 3 3 3 2 | Count the number of each unique row in a data frame? [closed]
Use the count function from the plyr package.
library(plyr)
df = data.frame(x1=c(0,1,1,1,2,3,3,3),
x2=c(0,1,1,3,2,3,3,2),
x3=c(0,1,1,1,2,3,3,2))
count(df, vars = c("x1", |
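For completeness (not part of the original answer), the same table can be produced in base R without plyr, using the same toy data:
df <- data.frame(x1 = c(0,1,1,1,2,3,3,3),
                 x2 = c(0,1,1,3,2,3,3,2),
                 x3 = c(0,1,1,1,2,3,3,2))
# aggregate a column of 1s over every observed combination of x1, x2, x3 and count them
aggregate(data.frame(freq = rep(1, nrow(df))), by = df, FUN = length)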
30,893 | Nearest Neighbor Matching in R using matchit | I am not an expert on either R nor propensity matching, but I ran into the same problem while working on a project. I think what matchit does is randomly pick one of the control subjects that falls within the caliper interval around the treated subject. If you set your seed to the same number every time you run your match.out line, you will get the same result:
set.seed(100)
match.out <- matchit(Category ~ FactorA + FactorB, Data,
method = 'nearest', distance = 'logit', caliper = .10)
Try running these two lines together. | Nearest Neighbor Matching in R using matchit | I am not an expert on either R nor propensity matching, but I ran into the same problem while working on a project. I think what matchit does is randomly pick one of the control subjects that falls wi | Nearest Neighbor Matching in R using matchit
I am not an expert on either R nor propensity matching, but I ran into the same problem while working on a project. I think what matchit does is randomly pick one of the control subjects that falls within the caliper interval around the treated subject. If you set your seed to the same number every time you run your match.out line, you will get the same result:
set.seed(100)
match.out <- matchit(Category ~ FactorA + FactorB, Data,
method = 'nearest', distance = 'logit', caliper = .10)
Try running these two lines together. | Nearest Neighbor Matching in R using matchit
I am not an expert on either R nor propensity matching, but I ran into the same problem while working on a project. I think what matchit does is randomly pick one of the control subjects that falls wi |
30,894 | Nearest Neighbor Matching in R using matchit | I came across the same problem. Setting the seed to a fixed number with the set.seed() function makes the result reproducible; however, by altering the number given in this function, the outcomes will change. It is true that matchit() will randomly select control subjects when they fall into the caliper.
By making use of the argument mahvars you can define on the basis of which variables a subject from the pool of control subjects within a caliper should be picked.
From the MatchIt manual (p. 19):
mahvars: variables on which to perform Mahalanobis-metric matching
within each caliper (default = NULL). Variables should be entered as a
vector of variable names. (e.g., mahvars = c("X1", "X2")). If mahvars
is specified without caliper, the caliper is set to 0.25. | Nearest Neighbor Matching in R using matchit | I came across the same problem. Setting the seed to a fixed number with the set.seed() function, however, by altering the number given in this function, the outcomes will change. It is true that match | Nearest Neighbor Matching in R using matchit
I came across the same problem. Setting the seed to a fixed number with the set.seed() function makes the result reproducible; however, by altering the number given in this function, the outcomes will change. It is true that matchit() will randomly select control subjects when they fall into the caliper.
By making use of the argument mahvars you can define on the basis of which variables a subject from the pool of control subjects within a caliper should be picked.
From the MatchIt manual (p. 19):
mahvars: variables on which to perform Mahalanobis-metric matching
within each caliper (default = NULL). Variables should be entered as a
vector of variable names. (e.g., mahvars = c("X1", "X2")). If mahvars
is specified without caliper, the caliper is set to 0.25. | Nearest Neighbor Matching in R using matchit
I came across the same problem. Setting the seed to a fixed number with the set.seed() function, however, by altering the number given in this function, the outcomes will change. It is true that match |
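A hypothetical call putting this together, reusing the variable names from the question above (depending on your MatchIt version, mahvars may also need to be given as a one-sided formula rather than a character vector; check ?matchit):
library(MatchIt)
set.seed(100)
m.out <- matchit(Category ~ FactorA + FactorB, data = Data,
                 method = 'nearest', distance = 'logit', caliper = .10,
                 mahvars = c("FactorA", "FactorB"))  # break ties within the caliper by Mahalanobis distance
summary(m.out)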
30,895 | Errors in fitting a censored quantile regression model | In such artificial data problems the default starting values for the Powell method are not very propitious. Here is what is happening: crq.fit.pow naively starts by trying to find an rq solution by ignoring the censoring. In your case, since your covariates are independent of the response and one of the covariates is binary, this is likely to yield a solution with a hard zero treatment coefficient. Then the algorithm tries to start at this solution and finds that this basic solution (the pair of observations that characterize the initial fit) both have treatment indicator 0, (or 1), and at that point, trying to solve for the starting value yields a singular linear system and you get your error.
So the problem arises from a rather nasty conspiracy of problems that have to do with your replicated data, the lack of a model signal, and, frankly, a rather naïve choice of a protocol for choosing a starting value. If you really want to force R to produce an answer you can use start = "global" -- and (at least for small problems like this) crq will produce a globally optimal solution. But I suspect that the better path is to change the model somewhat. | Errors in fitting a censored quantile regression model | In such artificial data problems the default starting values for the Powell method are not very propitious. Here is what is happening: crq.fit.pow naively starts by trying to find an rq solution by ig | Errors in fitting a censored quantile regression model
In such artificial data problems the default starting values for the Powell method are not very propitious. Here is what is happening: crq.fit.pow naively starts by trying to find an rq solution by ignoring the censoring. In your case, since your covariates are independent of the response and one of the covariates is binary, this is likely to yield a solution with a hard zero treatment coefficient. Then the algorithm tries to start at this solution and finds that this basic solution (the pair of observations that characterize the initial fit) both have treatment indicator 0, (or 1), and at that point, trying to solve for the starting value yields a singular linear system and you get your error.
So the problem arises from a rather nasty conspiracy of problems that have to do with your replicated data, the lack of a model signal, and, frankly, a rather naïve choice of a protocol for choosing a starting value. If you really want to force R to produce an answer you can use start = "global" -- and (at least for small problems like this) crq will produce a globally optimal solution. But I suspect that the better path is to change the model somewhat. | Errors in fitting a censored quantile regression model
In such artificial data problems the default starting values for the Powell method are not very propitious. Here is what is happening: crq.fit.pow naively starts by trying to find an rq solution by ig |
30,896 | How to correctly interpret a parallel analysis in exploratory factor analysis? | There are two equivalent ways to express the parallel analysis criterion. But first I need to take care of a misunderstanding prevalent in the literature.
The Misunderstanding
The so-called Kaiser rule (Kaiser didn't actually like the rule, if you read his 1960 paper) retains eigenvalues greater than one for principal component analysis. Using the so-called Kaiser rule, eigenvalues greater than zero are retained for principal factor analysis/common factor analysis. This confusion has arisen over the years because several authors have been sloppy about using the label "factor analysis" to describe "principal component analysis," when they are not the same thing.
See Gently Clarifying the Application of Horn’s Parallel Analysis to Principal Component Analysis Versus Factor Analysis for the math of it if you need convincing on this point.
Parallel Analysis Retention Criteria
For principal component analysis based on the correlation matrix of $p$ number of variables, you have several quantities. First you have the observed eigenvalues from an eigendecomposition of the correlation matrix of your data, $\lambda_{1}, \dots, \lambda_{p}$. Second, you have the mean eigenvalues from eigendecompositions of the correlation matrices of "a large number" of random (uncorrelated) data sets of the same $n$ and $p$ as your own, $\bar{\lambda}^{\text{r}}_{1},\dots,\bar{\lambda}^{\text{r}}_{p}$.
Horn also frames his examples in terms of "sampling bias" and estimates this bias for the $q^{\text{th}}$ eigenvalue (for principal component analysis) as $\varepsilon_{q} = \bar{\lambda}^{\text{r}}_{q} - 1$. This bias can then be used to adjust observed eigenvalues thus: $\lambda^{\text{adj}}_{q} = \lambda_{q} - \varepsilon_{q}$
Given these quantities you can express the retention criterion for the $q^{\text{th}}$ observed eigenvalue of a principal component parallel analysis in two mathematically equivalent ways:
$\lambda^{\text{adj}}_{q} \left\{\begin{array}{cc}
> 1 & \text{Retain.} \\\\
\le 1 & \text{Not retain.}
\end{array}\right.$
$\lambda_{q} \left\{\begin{array}{cc}
> \bar{\lambda}^{\text{r}}_{q} & \text{Retain.} \\\\
\le \bar{\lambda}^{\text{r}}_{q} & \text{Not retain.}
\end{array}\right.$
What about for principal factor analysis/common factor analysis? Here we have to bear in mind that the bias is the corresponding mean eigenvalue: $\varepsilon_{q} = \bar{\lambda}^{\text{r}}_{q} - 0 = \bar{\lambda}^{\text{r}}_{q}$ (minus zero because the Kaiser rule for eigendecomposition of the correlation matrix with the diagonal replaced by the communalities is to retain eigenvalues greater than zero). Therefore here $\lambda^{\text{adj}}_{q} = \lambda_{q} - \bar{\lambda}^{\text{r}}_{q}$.
Therefore the retention criteria for principal factor analysis/common factor analysis ought to be expressed as:
$\lambda^{\text{adj}}_{q} \left\{\begin{array}{cc}
> 0 & \text{Retain.} \\\\
\le 0 & \text{Not retain.}
\end{array}\right.$
$\lambda_{q} \left\{\begin{array}{cc}
> \bar{\lambda}^{\text{r}}_{q} & \text{Retain.} \\\\
\le \bar{\lambda}^{\text{r}}_{q} & \text{Not retain.}
\end{array}\right.$
Notice that the second form of expressing the retention criterion is consistent for both principal component analysis and common factor analysis (i.e. because the definition of $\lambda^{\text{adj}}_{q}$ changes depending on components/factors, but the second form of retention criterion is not expressed in terms of $\lambda^{\text{adj}}_{q}$).
one more thing...
Both principal component analysis and principal factor analysis/common factor analysis can be based on the covariance matrix rather than the correlation matrix. Because this changes the assumptions/definitions about the total and common variance, only the second forms of the retention criterion ought to be used when basing one's analysis on the covariance matrix. | How to correctly interpret a parallel analysis in exploratory factor analysis? | There are two equivalent ways to express the parallel analysis criterion. But first I need to take care of a misunderstanding prevalent in the literature.
The Misunderstanding
The so-called Kaiser rul | How to correctly interpret a parallel analysis in exploratory factor analysis?
There are two equivalent ways to express the parallel analysis criterion. But first I need to take care of a misunderstanding prevalent in the literature.
The Misunderstanding
The so-called Kaiser rule (Kaiser didn't actually like the rule, if you read his 1960 paper) retains eigenvalues greater than one for principal component analysis. Using the so-called Kaiser rule, eigenvalues greater than zero are retained for principal factor analysis/common factor analysis. This confusion has arisen over the years because several authors have been sloppy about using the label "factor analysis" to describe "principal component analysis," when they are not the same thing.
See Gently Clarifying the Application of Horn’s Parallel Analysis to Principal Component Analysis Versus Factor Analysis for the math of it if you need convincing on this point.
Parallel Analysis Retention Criteria
For principal component analysis based on the correlation matrix of $p$ number of variables, you have several quantities. First you have the observed eigenvalues from an eigendecomposition of the correlation matrix of your data, $\lambda_{1}, \dots, \lambda_{p}$. Second, you have the mean eigenvalues from eigendecompositions of the correlation matrices of "a large number" of random (uncorrelated) data sets of the same $n$ and $p$ as your own, $\bar{\lambda}^{\text{r}}_{1},\dots,\bar{\lambda}^{\text{r}}_{p}$.
Horn also frames his examples in terms of "sampling bias" and estimates this bias for the $q^{\text{th}}$ eigenvalue (for principal component analysis) as $\varepsilon_{q} = \bar{\lambda}^{\text{r}}_{q} - 1$. This bias can then be used to adjust observed eigenvalues thus: $\lambda^{\text{adj}}_{q} = \lambda_{q} - \varepsilon_{q}$
Given these quantities you can express the retention criterion for the $q^{\text{th}}$ observed eigenvalue of a principal component parallel analysis in two mathematically equivalent ways:
$\lambda^{\text{adj}}_{q} \left\{\begin{array}{cc}
> 1 & \text{Retain.} \\\\
\le 1 & \text{Not retain.}
\end{array}\right.$
$\lambda_{q} \left\{\begin{array}{cc}
> \bar{\lambda}^{\text{r}}_{q} & \text{Retain.} \\\\
\le \bar{\lambda}^{\text{r}}_{q} & \text{Not retain.}
\end{array}\right.$
What about for principal factor analysis/common factor analysis? Here we have to bear in mind that the bias is the corresponding mean eigenvalue: $\varepsilon_{q} = \bar{\lambda}^{\text{r}}_{q} - 0 = \bar{\lambda}^{\text{r}}_{q}$ (minus zero because the Kaiser rule for eigendecomposition of the correlation matrix with the diagonal replaced by the communalities is to retain eigenvalues greater than zero). Therefore here $\lambda^{\text{adj}}_{q} = \lambda_{q} - \bar{\lambda}^{\text{r}}_{q}$.
Therefore the retention criteria for principal factor analysis/common factor analysis ought to be expressed as:
$\lambda^{\text{adj}}_{q} \left\{\begin{array}{cc}
> 0 & \text{Retain.} \\\\
\le 0 & \text{Not retain.}
\end{array}\right.$
$\lambda_{q} \left\{\begin{array}{cc}
> \bar{\lambda}^{\text{r}}_{q} & \text{Retain.} \\\\
\le \bar{\lambda}^{\text{r}}_{q} & \text{Not retain.}
\end{array}\right.$
Notice that the second form of expressing the retention criterion is consistent for both principal component analysis and common factor analysis (i.e. because the definition of $\lambda^{\text{adj}}_{q}$ changes depending on components/factors, but the second form of retention criterion is not expressed in terms of $\lambda^{\text{adj}}_{q}$).
one more thing...
Both principal component analysis and principal factor analysis/common factor analysis can be based on the covariance matrix rather than the correlation matrix. Because this changes the assumptions/definitions about the total and common variance, only the second forms of the retention criterion ought to be used when basing one's analysis on the covariance matrix. | How to correctly interpret a parallel analysis in exploratory factor analysis?
There are two equivalent ways to express the parallel analysis criterion. But first I need to take care of a misunderstanding prevalent in the literature.
The Misunderstanding
The so-called Kaiser rul |
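A small numerical sketch of the second (shared) form of the retention criterion above, for the principal component case, with made-up dimensions and number of random replicates:
set.seed(123)
n <- 200; p <- 6; nrep <- 200
x <- matrix(rnorm(n * p), n, p)
x[, 2] <- x[, 1] + rnorm(n, sd = 0.5)      # build in one genuinely correlated pair
obs <- eigen(cor(x))$values                # observed eigenvalues
rand <- replicate(nrep, eigen(cor(matrix(rnorm(n * p), n, p)))$values)
retain <- obs > rowMeans(rand)             # retain q-th component if lambda_q exceeds the mean random lambda_q
cbind(observed = round(obs, 2), mean.random = round(rowMeans(rand), 2), retain = retain)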
30,897 | How to correctly interpret a parallel analysis in exploratory factor analysis? | Yes, it is possible to have a value of 2.21 if the sample size is not infinitely large (or large enough...). This is, in fact the motivation behind the development of Parallel Analysis as an augmentation to the eigenvalue 1 rule.
I cite Valle 1999 in this answer and have italicized the part speaking directly to your question.
Selection of the Number of Principal Components: The Variance of the Reconstruction Error Criterion with a Comparison to Other Methods†
Sergio Valle, Weihua Li, and S. Joe Qin*
Industrial & Engineering Chemistry Research 1999 38 (11), 4389-4401
Parallel Analysis. The PA method basically
builds PCA models for two matrices: one is the original
data matrix and the other is an uncorrelated data
matrix with the same size as the original matrix. This
method was developed originally by Horn to enhance
the performance of the Scree test. When the eigenvalues
for each matrix are plotted in the same figure, all the
values above the intersection represent the process
information and the values under the intersection are
considered noise. Because of this intersection, the parallel
analysis method is not ambiguous in the selection of
the number of PCs.
For a large number of samples, the eigenvalues for a
correlation matrix of uncorrelated variables are 1. In
this case, the PA method is identical to the AE method.
However, when the samples are generated with a finite
number of samples, the initial eigenvalues exceed 1,
while the final eigenvalues are under 1. That is why
Horn suggested comparing the correlation matrix
eigenvalues for uncorrelated variables with those of a
real data matrix based on the same sample size. | How to correctly interpret a parallel analysis in exploratory factor analysis? | Yes, it is possible to have a value of 2.21 if the sample size is not infinitely large (or large enough...). This is, in fact the motivation behind the development of Parallel Analysis as an augmentat | How to correctly interpret a parallel analysis in exploratory factor analysis?
Yes, it is possible to have a value of 2.21 if the sample size is not infinitely large (or large enough...). This is, in fact the motivation behind the development of Parallel Analysis as an augmentation to the eigenvalue 1 rule.
I cite Valle 1999 in this answer and have italicized the part speaking directly to your question.
Selection of the Number of Principal Components: The Variance of the Reconstruction Error Criterion with a Comparison to Other Methods†
Sergio Valle, Weihua Li, and S. Joe Qin*
Industrial & Engineering Chemistry Research 1999 38 (11), 4389-4401
Parallel Analysis. The PA method basically
builds PCA models for two matrices: one is the original
data matrix and the other is an uncorrelated data
matrix with the same size as the original matrix. This
method was developed originally by Horn to enhance
the performance of the Scree test. When the eigenvalues
for each matrix are plotted in the same figure, all the
values above the intersection represent the process
information and the values under the intersection are
considered noise. Because of this intersection, the parallel
analysis method is not ambiguous in the selection of
the number of PCs.
For a large number of samples, the eigenvalues for a
correlation matrix of uncorrelated variables are 1. In
this case, the PA method is identical to the AE method.
However, when the samples are generated with a finite
number of samples, the initial eigenvalues exceed 1,
while the final eigenvalues are under 1. That is why
Horn suggested comparing the correlation matrix
eigenvalues for uncorrelated variables with those of a
real data matrix based on the same sample size. | How to correctly interpret a parallel analysis in exploratory factor analysis?
Yes, it is possible to have a value of 2.21 if the sample size is not infinitely large (or large enough...). This is, in fact the motivation behind the development of Parallel Analysis as an augmentat |
30,898 | How to correctly interpret a parallel analysis in exploratory factor analysis? | Your example is certainly not clear, but it might not be nonsense either. Briefly, consider the possibility that the example is basing its decision rule on the eigenvalue of the first simulated factor that is larger than the real factor of the same factor number.
Here's another example in R:
d8a=data.frame(y=rbinom(99,1,.5),x=c(rnorm(50),rep(0,49)),z=rep(c(1,0),c(50,49)))
require(psych);fa.parallel(d8a)
The data are random, and there are only three variables, so a second factor certainly wouldn't make sense, and that's what the parallel analysis indicates.* The results also corroborate what @Alexis said regarding "The Misunderstanding".
Say I interpret this analysis as follows: “Parallel analysis suggests that only factors [not components] with eigenvalue of 1.2E-6 or more should be retained.” This makes a certain amount of sense because that's the value of the first simulated eigenvalue that is larger than the "real" eigenvalue, and all eigenvalues thereafter necessarily decrease. It's an awkward way to report that result, but it's at least consistent with the reasoning that one should look very skeptically at any factors (or components) with eigenvalues that aren't much larger than the corresponding eigenvalues from simulated, uncorrelated data. This should be the case consistently after the first instance on the scree plot where the simulated eigenvalue exceeds the corresponding, real eigenvalue. In the above example, the simulated third factor is very slightly smaller than the "real" third factor, but nobody in their right mind is going to retain a three-factor solution here.
*In this case, R says, "Parallel analysis suggests that the number of factors = 1 and the number of components = 2," but hopefully most of us know not to trust our software to interpret our plots for us...I definitely would not retain the second component just because it's infinitesimally larger than the second simulated component. | How to correctly interpret a parallel analysis in exploratory factor analysis? | Your example is certainly not clear, but it might not be nonsense either. Briefly, consider the possibility that the example is basing its decision rule on the eigenvalue of the first simulated factor | How to correctly interpret a parallel analysis in exploratory factor analysis?
Your example is certainly not clear, but it might not be nonsense either. Briefly, consider the possibility that the example is basing its decision rule on the eigenvalue of the first simulated factor that is larger than the real factor of the same factor number.
Here's another example in R:
d8a=data.frame(y=rbinom(99,1,.5),x=c(rnorm(50),rep(0,49)),z=rep(c(1,0),c(50,49)))
require(psych);fa.parallel(d8a)
The data are random, and there are only three variables, so a second factor certainly wouldn't make sense, and that's what the parallel analysis indicates.* The results also corroborate what @Alexis said regarding "The Misunderstanding".
Say I interpret this analysis as follows: “Parallel analysis suggests that only factors [not components] with eigenvalue of 1.2E-6 or more should be retained.” This makes a certain amount of sense because that's the value of the first simulated eigenvalue that is larger than the "real" eigenvalue, and all eigenvalues thereafter necessarily decrease. It's an awkward way to report that result, but it's at least consistent with the reasoning that one should look very skeptically at any factors (or components) with eigenvalues that aren't much larger than the corresponding eigenvalues from simulated, uncorrelated data. This should be the case consistently after the first instance on the scree plot where the simulated eigenvalue exceeds the corresponding, real eigenvalue. In the above example, the simulated third factor is very slightly smaller than the "real" third factor, but nobody in their right mind is going to retain a three-factor solution here.
*In this case, R says, "Parallel analysis suggests that the number of factors = 1 and the number of components = 2," but hopefully most of us know not to trust our software to interpret our plots for us...I definitely would not retain the second component just because it's infinitesimally larger than the second simulated component. | How to correctly interpret a parallel analysis in exploratory factor analysis?
Your example is certainly not clear, but it might not be nonsense either. Briefly, consider the possibility that the example is basing its decision rule on the eigenvalue of the first simulated factor |
30,899 | Why are there no one-inflated count data models? | A one-inflated Poisson model for a count $Y_i$ is
$$\begin{align}\Pr(Y_i = 1) &= \pi_i +(1-\pi_i)\cdot\mu_i\mathrm{e}^{-\mu_i}\\
\Pr(Y_i = y_i) &= (1-\pi_i)\cdot\frac{\mu_i^{y_i}\mathrm{e}^{-\mu_i}}{y_i!} \qquad \text{when } y_i\neq 1
\end{align}$$
where the Poisson mean $\mu_i$ & Bernoulli probability $\pi_i$ are related to the predictors through appropriate link functions. You can define a similar model to inflate probabilities for any values you choose.
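For instance, the probability mass function above can be written out directly (a plain-R sketch, with no particular package assumed):
doipois <- function(y, mu, pr1) {
  # one-inflated Poisson: extra mass pr1 at y == 1, scaled Poisson elsewhere
  ifelse(y == 1,
         pr1 + (1 - pr1) * dpois(1, mu),
         (1 - pr1) * dpois(y, mu))
}
round(doipois(0:5, mu = 2, pr1 = 0.3), 3)
sum(doipois(0:50, mu = 2, pr1 = 0.3))   # sums to 1, so it is a proper distribution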
Still, zero has a special (& once controversial) place among the counting numbers—in a sense representing the absence of anything to count. And it's the "nothing" vs "something" distinction, rather than the "one" vs "any other count" distinction that tends to be relevant across a wide range of phenomena we like to model: there's one process that gives a nought, one, two, ... count & another that gives no count at all. | Why are there no one-inflated count data models? | A one-inflated Poisson model for a count $Y_i$ is
$$\begin{align}\Pr(Y_i = 1) &= \pi_i +(1-\pi_i)\cdot\mu_i\mathrm{e}^{-\mu_i}\\
\Pr(Y_i = y_i) &= (1-\pi_i)\cdot\frac{\mu_i^{y_i}\mathrm{e}^{-\mu_i}}{y | Why are there no one-inflated count data models?
A one-inflated Poisson model for a count $Y_i$ is
$$\begin{align}\Pr(Y_i = 1) &= \pi_i +(1-\pi_i)\cdot\mu_i\mathrm{e}^{-\mu_i}\\
\Pr(Y_i = y_i) &= (1-\pi_i)\cdot\frac{\mu_i^{y_i}\mathrm{e}^{-\mu_i}}{y_i!} \qquad \text{when } y_i\neq 1
\end{align}$$
where the Poisson mean $\mu_i$ & Bernoulli probability $\pi_i$ are related to the predictors through appropriate link functions. You can define a similar model to inflate probabilities for any values you choose.
Still, zero has a special (& once controversial) place among the counting numbers—in a sense representing the absence of anything to count. And it's the "nothing" vs "something" distinction, rather than the "one" vs "any other count" distinction that tends to be relevant across a wide range of phenomena we like to model: there's one process that gives a nought, one, two, ... count & another that gives no count at all. | Why are there no one-inflated count data models?
A one-inflated Poisson model for a count $Y_i$ is
$$\begin{align}\Pr(Y_i = 1) &= \pi_i +(1-\pi_i)\cdot\mu_i\mathrm{e}^{-\mu_i}\\
\Pr(Y_i = y_i) &= (1-\pi_i)\cdot\frac{\mu_i^{y_i}\mathrm{e}^{-\mu_i}}{y |
30,900 | Why are there no one-inflated count data models? | The R package VGAM has function vglm which can be used to fit all sorts of Poisson-esque models. You can use it to specify a one-inflated model, so something like vglm(Y~X,family=oipospoisson(),data=data). See here for more details. | Why are there no one-inflated count data models? | The R package VGAM has function vglm which can be used to fit all sorts of Poisson-esque models. You can use it to specify a one-inflated model, so something like vglm(Y~X,family=oipospoisson(),data=d | Why are there no one-inflated count data models?
The R package VGAM has function vglm which can be used to fit all sorts of Poisson-esque models. You can use it to specify a one-inflated model, so something like vglm(Y~X,family=oipospoisson(),data=data). See here for more details. | Why are there no one-inflated count data models?
The R package VGAM has function vglm which can be used to fit all sorts of Poisson-esque models. You can use it to specify a one-inflated model, so something like vglm(Y~X,family=oipospoisson(),data=d |