Hypothesis testing. Why center the sampling distribution on H0?
Suppose $\boldsymbol X = (X_1, X_2, \ldots, X_n)$ is a sample drawn from a normal distribution with unknown mean $\mu$ and known variance $\sigma^2$. The sample mean $\bar X$ is therefore normal with mean $\mu$ and variance $\sigma^2/n$. On this much, I think there can be no possibility of disagreement. Now, you propose that our test statistic is $$Z = \frac{\bar X - \mu}{\sigma/\sqrt{n}} \sim \operatorname{Normal}(0,1).$$ Right? BUT THIS IS NOT A STATISTIC. Why? Because $\mu$ is an unknown parameter. A statistic is a function of the sample that does not depend on any unknown parameters. Therefore, an assumption must be made about $\mu$ in order for $Z$ to be a statistic. One such assumption is to write $$H_0 : \mu = \mu_0, \quad \text{vs.} \quad H_1 : \mu \ne \mu_0,$$ under which $$Z \mid H_0 = \frac{\bar X - \mu_0}{\sigma/\sqrt{n}} \sim \operatorname{Normal}(0,1),$$ which is a statistic. By contrast, you propose to use $\mu = \bar X$ itself. In that case, $Z = 0$ identically, and it is not even a random variable, let alone normally distributed. There is nothing to test.
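Here is a minimal R sketch (simulated data; the values of $\sigma$, $n$ and $\mu_0$ are made up) of what becomes a usable statistic once $\mu_0$ is fixed by $H_0$:

set.seed(1)
sigma <- 2; n <- 25; mu0 <- 0                  # known sd, sample size, hypothesised mean
x <- rnorm(n, mean = 0.5, sd = sigma)          # a sample whose true mean the analyst does not know
z <- (mean(x) - mu0) / (sigma / sqrt(n))       # a genuine statistic: no unknown parameters remain
p <- 2 * pnorm(-abs(z))                        # two-sided p-value from the Normal(0, 1) reference
c(z = z, p = p)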
Hypothesis testing. Why center the sampling distribution on H0?
"However, because the shape of this assumed distribution is actually based on the sample data, centering it on H0 seems like an odd choice to me." This is actually not true. The shape of this assumed distribution comes from accepting $H_0$ as true; the sample is not directly involved in that, other than through some assumptions. Using the sample directly is not enough: you also need the null hypothesis to hold. If one were instead to use the sampling distribution of the statistic, i.e. center the distribution on the sample statistic, then hypothesis testing would correspond to estimating the probability of H0 given the sample. The question is: how do you estimate the probability of something you have already assumed to be true? If you assume $H_0$ is true, it is futile to try to estimate the probability that $H_0$ is true. "I thus feel that centering the distribution on H0 is an unnecessary complication." You don't have two distributions there; there is only one, the one assumed to be your ground truth, i.e. the one that comes with $H_0$. There is, however, a sampling distribution derived from the sample, but it is not involved in the hypotheses you use. A good exercise is to try to replicate the same logic with an asymmetric distribution. Take the chi-squared distribution, as in the chi-squared test of independence. Are you able to reproduce it? I think the answer is no.
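As a concrete version of the suggested exercise, here is a small hedged R sketch with a made-up 2x2 table; the reference distribution is the (asymmetric) chi-squared distribution, fixed entirely by $H_0$, and there is no sensible way to "center" it on the observed statistic:

tab <- matrix(c(30, 20, 10, 40), nrow = 2)   # hypothetical contingency table
test <- chisq.test(tab, correct = FALSE)     # null: the row and column variables are independent
test$statistic                               # observed chi-squared statistic
test$parameter                               # degrees of freedom of the H0 reference distribution
test$p.value                                 # upper-tail area under that asymmetric null distribution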
Hypothesis testing. Why center the sampling distribution on H0?
From what I gather, you are arguing that it makes more sense to 'flip' $H_0$ and $H_1$. I find it helpful to think of hypothesis testing as a proof by contradiction. We assume $H_0$ to be true, then show that the evidence indicates such an assumption is flawed, thus justifying the rejection of $H_0$ in favor of $H_1$. This works because when we assume $H_0$ and center our distribution there, we can determine how likely or unlikely our observation is. For example, if $H_0: \mu = 0$ vs. $H_1: \mu \neq 0$ and our test shows that data as extreme as ours would occur less than 5% of the time if $\mu$ really were 0, we can reject $H_0$ at the 5% level. The reverse is not necessarily true. Say we do an experiment and find that data like ours would arise about 30% of the time under the null. We cannot reject the null, but we also do not accept it. This situation does not show that $H_0$ (the null) is true, but that we do not have the evidence to show that it is false. Now imagine if we flipped this situation. Say we assume $H_1$ and find that, given our results, $H_0$ appears very unlikely (5% or less); what does that mean? Sure, we can reject the null, but can we necessarily accept $H_1$? It is hard to justify accepting the thing we assumed to be true in the beginning. Showing that $H_0$ is false is not the result we are after; we want to argue in favor of $H_1$. By doing the test in the way you describe, we are showing that we do not have evidence to say that $H_1$ is false, which is subtly different from arguing that $H_1$ is true.
Adjust for everything you have in propensity score?
I've personally been asking this question for at least 5 years, since for me it's the "big" practical question for using propensity score matching on observational data to estimate causal effects. This is a superb question, and there's a subtle disagreement that runs deep between the statistics and computer science communities. From my experience, statisticians tend to advocate "throwing the kitchen sink" of observable inputs into the estimation of the propensity score, while computer scientists tend to advocate a theoretical reason for the inputs (though statisticians may occasionally mention the importance of theory in justifying the selection of inputs into the propensity score model). The difference, I believe, stems from the fact that computer scientists (in particular Judea Pearl) tend to think of causality in terms of directed acyclic graphs. When viewing causality through directed acyclic graphs, it's fairly easy to see that you can condition on a so-called "collider" variable, which may "un-block" back-door paths and actually induce bias into your estimation of a causal effect. My takeaway? If you have solid theory on what affects selection into the treatment, use that in the propensity score estimation, then conduct a sensitivity analysis to determine how sensitive your estimate is to unobserved confounding variables. If you have almost no theory to guide you, then throw in the "kitchen sink" and then conduct a sensitivity analysis. A note on selecting inputs for the propensity score model (this may be obvious, but it's worth noting for others unfamiliar with estimating causal effects from observational data): don't control for post-treatment variables. That is, you want your inputs into the propensity score model to be measured before the treatment and your outcome to be measured after the treatment. In observational data this practically means that you need three waves of data, with a detailed set of baseline covariates, treatment measured in the second wave, and the outcome measured in the final wave.
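To make the collider point concrete, here is a small R simulation (entirely made-up structural equations, not from the answer above): treatment and outcome are unrelated, yet adjusting for a common effect of both induces a spurious association.

set.seed(123)
n <- 10000
treat <- rbinom(n, 1, 0.5)                  # treatment, assigned independently of the outcome
y <- rnorm(n)                               # outcome, truly unaffected by treatment
collider <- treat + y + rnorm(n)            # a common effect of both treatment and outcome
coef(lm(y ~ treat))["treat"]                # approximately 0: correct
coef(lm(y ~ treat + collider))["treat"]     # clearly negative: bias from conditioning on the collider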
Adjust for everything you have in propensity score?
In the absence of subject matter knowledge, overinclusion of variables is generally better than underinclusion, and there is little reason to do model selection to build a PS. What is more important is to build a flexible model. My default approach is to spline every continuous variable and to not look at $P$-values for variables in the PS, i.e., I use a flexible additive logistic regression model. There are many advantages of covariate adjustment using the logit PS. I typically spline the logit of the PS and include it as a multiple-degree-of-freedom adjustment variable, after doing due diligence regarding non-overlap regions. See http://www.citeulike.org/user/harrelfe/article/13340175 and http://www.citeulike.org/user/harrelfe/article/13265389 and more articles in http://www.citeulike.org/user/harrelfe/tag/propensity-score. You also have to be sure to include as separate covariates the likely strong predictors of $Y$, as the PS is just for bias adjustment, not for capturing outcome heterogeneity. I am dubious of any matching method that results in discarding matchable observations or that is highly dependent on dataset order. Discarded observations have a lot to say about how covariate effects should be estimated.
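A rough R sketch of this kind of workflow (simulated data; the variable names and the use of splines::ns are my own assumptions, not a prescription from the answer above):

library(splines)
set.seed(1)
n <- 2000
age <- rnorm(n, 50, 10); sbp <- rnorm(n, 130, 15)                  # hypothetical baseline covariates
treat <- rbinom(n, 1, plogis(-6 + 0.06 * age + 0.02 * sbp))        # treatment assignment
y <- 1 + 0.5 * treat + 0.03 * age + rnorm(n)                       # outcome; true treatment effect is 0.5
ps_fit <- glm(treat ~ ns(age, 4) + ns(sbp, 4), family = binomial)  # flexible additive PS model, no variable selection
logit_ps <- qlogis(fitted(ps_fit))                                 # logit of the estimated PS
out_fit <- lm(y ~ treat + ns(logit_ps, 4) + ns(age, 4))            # spline of the logit PS plus a strong predictor of Y
coef(out_fit)["treat"]                                             # adjusted treatment effect estimate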
Adjust for everything you have in propensity score?
Theoretical insight, institutional knowledge, and good research in the field should be your guide about what $X$s to match on. There is no deterministic variable selection procedure that will tell you which variables to choose. Here are some general guidelines. The Conditional Independence Assumption (CIA) will be satisfied if $X$ includes all of the variables that affect both (not either, but both) participation and outcomes. Including $X$s affected by the treatment, either ex post or ex ante in anticipation of treatment, will invalidate the assumption. For example, if an agent knows that the vaccine is coming, he may adjust his pre-shot behavior. Including instruments – variables that affect participation but not outcomes – is also a bad idea. They will not help with selection bias and may worsen the support problem drastically. For example, if some people are encouraged to take up treatment, you don't want to condition on that encouragement. The inclusion of irrelevant variables in the propensity score specification can also increase the variance, since either some treated units have to be discarded from the analysis, or control units have to be used more than once, or the bandwidth has to increase. In short, the kitchen-sink approach is definitely not recommended. The CIA cannot be tested without experimental data or "over-identifying" assumptions (as in the case of the pre-program test or other placebo tests). If you have enough historical data, I would definitely try the latter on your carefully curated set. Response to edit: I can't comment on the kidneys since that is too far outside my area (other than pies, which I know something about). Urban seems like a variable that affects both participation and outcome through the costs associated with travel to the hospital for treatment and examination. It might pick up some of the unobservables that keep us up at night. The anticipation story I have in mind is that people may adjust their behavior if they know they will be treated in the future, for example by changing their diets.
Adjust for everything you have in propensity score?
Because the propensity score model is purely predictive - you're not interested in any coefficients - I've always understood it that you can hurl in all your variables that affect both cohort entry and outcome. You can twist these variables as you wish - square them, take roots, add all types of interactions, etc. - as long as you're increasing the predictive quality of your model. In theory, you shouldn't even have to worry about hold-out data for your predictive model, as you have no desire to generalise these results past your sample (basically, the risk of 'overfitting' isn't a problem). Finally, you don't have to limit yourself to logistic regression; as you're modelling a binary output, you might even use a GAM - basically, anything to improve the prediction rates. (I must add, as a contrary note to @statsRus' point on use: in my experience it's the computer scientists who use all variables while the statisticians carefully consider each one. I guess different work backgrounds produce different working habits.) As for use of the score, it's generally discouraged to use it as a covariate - it has less impact - and certainly not alongside the variables used to make the scoring variable. An argument might be made if, in the propensity score model, you categorised a continuous variable - age, for instance - where you might then include the continuous version in the model; but really, don't categorise the variable in the first place... Using the score for matching (with calipers - especially variable 1:N matching) is popular, but I believe the most impactful technique is inverse probability of treatment weighting (IPTW) - although I've not used this method and I can't remember how it works. Try looking at Peter C. Austin's work at the University of Toronto - he's written loads of papers on propensity scores. Here's one on matching, for instance.
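For readers curious about the IPTW idea mentioned at the end, here is a hedged R sketch of the basic weights (simulated data and my own variable names; Austin's papers cover refinements such as stabilised weights and balance diagnostics):

set.seed(7)
n <- 5000
x1 <- rnorm(n); x2 <- rnorm(n)
treat <- rbinom(n, 1, plogis(0.8 * x1 + 0.5 * x2))     # confounded treatment assignment
y <- 2 * treat + x1 + x2 + rnorm(n)                    # true treatment effect is 2
ps <- fitted(glm(treat ~ x1 + x2, family = binomial))  # propensity score model
w <- ifelse(treat == 1, 1 / ps, 1 / (1 - ps))          # inverse probability of treatment weights
coef(lm(y ~ treat))["treat"]                           # unadjusted estimate, biased upwards by confounding
coef(lm(y ~ treat, weights = w))["treat"]              # weighted estimate, close to 2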
Can someone please explain to me what the particular scenarios mean?
In scenario 1, there are two bivariate Normal distributions. Here I show two such probability density functions (PDFs) superimposed in a pseudo-3D plot. One has a mean near $(0,0)$ (at the left) and the other has a mean near $(3,3)$. Samples are drawn independently from each. I took the same number ($300$) so that we wouldn't have to compensate for different sample sizes in evaluating these data. Point symbols distinguish the two samples. The gray/white background is the best discriminator: points in gray are more likely to arise from the second distribution than the first. (The discriminator is elliptical, not linear, because these distributions have slightly different covariance matrices.)

In scenario 2 we will look at two comparable datasets produced using mixture distributions. There are two mixtures. Each one is determined by ten distinct Normal distributions. They all have different covariance matrices (which I do not show) and different means. Here are the locations of their means (which I have termed "nuclei"):

A mixture of Gaussians is best described in terms of the generative model. One first generates a discrete variable that determines which of the component Gaussians to use, and then generates an observation from the chosen density. To draw a set of independent observations from a mixture, you first pick one of its components at random and then draw a value from that component. The PDF of a mixture is a weighted sum of the PDFs of the components, with the weights being the chance of selecting each component in that first stage. Here are the PDFs of the two mixtures. I drew them with a little extra transparency so you can see them better in the middle where they overlap:

To make the two scenarios easier to compare, the means and covariance matrices of these two PDFs were chosen to closely match the corresponding means and covariances of the two bivariate Normal PDFs used in scenario 1. To emulate scenario 2 (the mixture distributions), I drew samples of 300 independent values from each of the two mixtures by selecting one of its components with probability $1/10$ and then independently drawing a value from the selected component. Because the selection of components is random, the number of draws from each component was not always exactly $30 = 300 \times 1/10$, but it was usually close to that. Here is the result:

The black dots show the ten component means for each of the two distributions. Clustered around each black dot are approximately 30 samples. However, there is much intermingling of values, so it is impossible from this figure to determine which samples were drawn from which component. "In the case of mixtures of tightly clustered Gaussians the story is different. A linear decision boundary is unlikely to be optimal, and in fact is not. The optimal decision boundary is nonlinear and disjoint, and as such will be much more difficult to obtain." The background in that last figure is the best discriminator for these two mixture distributions. It is complicated because the distributions are complicated; obviously it is not just a line or smooth curve, such as appeared in scenario 1.

I believe the entire point of this comparison lies in our option, as analysts, to choose which model we want to use to analyze either one of these two datasets. Because we would not in practice know which model is appropriate, we could try using a mixture model for the data in scenario 1, and we could equally well try using a Normal model for the data in scenario 2. We would likely be fairly successful in either case due to the relatively low overlap (between blue and red sample points). Nevertheless, the different (equally valid) models can produce distinctly different discriminators (especially in areas where data are sparse).
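A small R sketch of the generative model just described (my own made-up nuclei and spreads, not the ones used for the figures above):

set.seed(2)
k <- 10
nuclei <- cbind(rnorm(k), rnorm(k))                              # component means ("nuclei") for one class
comp <- sample(seq_len(k), 300, replace = TRUE)                  # step 1: pick a component for each draw
samp <- nuclei[comp, ] + matrix(rnorm(600, sd = 0.3), ncol = 2)  # step 2: draw from the chosen Gaussian
table(comp)                                                      # roughly 30 draws per component, but not exactly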
Can someone please explain to me what the particular scenarios mean?
The point being made in section 2.3 of the book (where this quote comes from) is that if the source of the data is from Scenario 1, there is nothing better you can do than a linear division (as in figure 2.1). Any finer tuning is actually self-delusion: you should then expect to get worse results predicting cases outside the training data if you do not use the optimal linear division. However, if the source of the data is from Scenario 2, you can reasonably expect the low variance of each of the $10$ source distributions to make it more likely that data points of the same colour will tend to cluster together in a non-linear manner, and so a non-linear approach may be more skilful. The example the book gives is that of looking at the colours of nearest neighbours: figure 2.2 shows the classification boundary if you look at the 15 nearest neighbours (a fairly smooth non-linear boundary), while figure 2.3 shows the boundary if you look at just the 1 nearest neighbour (a very jagged boundary). I suspect that the point being made is that the value of statistical or machine learning techniques depends on the source of the data, and that some techniques are better in some circumstances and others in other circumstances. But it is also possible to generalise ideas from different methods and come up with further techniques, as section 2.4 and figure 2.5 do with what the book calls the "Bayes classifier".
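A hedged R sketch of the nearest-neighbour comparison the book makes, on simulated two-class data rather than the book's actual dataset:

library(class)
set.seed(3)
train <- rbind(matrix(rnorm(200), ncol = 2),              # class 0
               matrix(rnorm(200, mean = 1.5), ncol = 2))  # class 1
labels <- factor(rep(0:1, each = 100))
grid <- expand.grid(X1 = seq(-3, 4.5, 0.1), X2 = seq(-3, 4.5, 0.1))
pred15 <- knn(train, grid, labels, k = 15)  # fairly smooth non-linear boundary, as in figure 2.2
pred1  <- knn(train, grid, labels, k = 1)   # very jagged boundary that chases individual points, as in figure 2.3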
Can someone please explain to me what the particular scenarios mean?
whuber made an extraordinarily good statement. Here I just want to add some more details. Scenario 1 is talking about linear discriminant analysis (LDA), where the decision boundary is linear, and whuber is describing the more general quadratic discriminant analysis (QDA), where the decision boundary is a quadratic function. Of course, LDA is a special case of QDA. A linear decision boundary is optimal for scenario 1 because, when the two classes share a common covariance matrix, maximum likelihood estimation of the Gaussian class densities gives a log-odds that is linear in the inputs, so the resulting boundary is a straight line. At the same time, even though LDA looks very different from linear regression, the decision boundaries given by these two methods are very similar. Intuitively, if we think of these two decision boundaries as two straight lines, the two lines will have the same slope but different intercepts. For more mathematical details, I would recommend this blog, which gives a great and detailed explanation.
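For anyone who wants to see the two fits side by side, a hedged R sketch with simulated data (MASS provides lda and qda; the data here are made up):

library(MASS)
set.seed(4)
n <- 200
cls <- factor(rep(c("A", "B"), each = n))
xA <- mvrnorm(n, c(0, 0), diag(2))                       # class A: identity covariance
xB <- mvrnorm(n, c(3, 3), matrix(c(1, 0.5, 0.5, 2), 2))  # class B: a different covariance matrix
dat <- data.frame(rbind(xA, xB), cls)
lda_fit <- lda(cls ~ X1 + X2, data = dat)  # linear boundary: assumes a common covariance matrix
qda_fit <- qda(cls ~ X1 + X2, data = dat)  # quadratic boundary: allows class-specific covariances
mean(predict(lda_fit)$class == dat$cls)    # training accuracy of each fit
mean(predict(qda_fit)$class == dat$cls)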
Is regression of x on y clearly better than y on x in this case?
Many lab papers, especially instrument-testing experiments, use exactly this x-on-y regression. The argument is that, in the way the data are collected, the y conditions are controlled, and x is obtained from the instrument reading, which introduces error. That is the original physical model of the experiment, so x ~ y + error is the more suitable specification. To reduce experimental error, y is sometimes held at the same controlled condition while x is measured several times (repeated experiments). Thinking through this procedure may help you understand the logic behind those papers and see why x ~ y + error is the natural model.
Is regression of x on y clearly better than y on x in this case?
As is typically the case, different analyses answer different questions. Both $Y\text{ on }X$ and $X\text{ on }Y$ could be valid here, you just want to make sure your analysis matches the question you want to answer. (For more along these lines, you may want to read my answer here: What is the difference between linear regression on Y with X and X with Y?) You are right that if all you will want to do is predict the most likely $Y$ value given knowledge of an $X$ value, you would regress $Y\text{ on }X$. However, if you want to understand how these measures are related to each other, you might want to use an errors-in-variables approach, since you believe that there is measurement error in $X$. On the other hand, regressing $X\text{ on }Y$ (and assuming $Y$ is perfectly error-free--a so-called gold standard) allows you to study the measurement properties of $X$. For example, you can determine if the instrument becomes biased as the true value increases (or decreases) by assessing whether the function is straight or curved. When trying to understand the properties of a measurement instrument, understanding the nature of the measurement error is very important, and this can be done by regressing $X\text{ on }Y$. For instance, when checking for homoscedasticity, you can determine if the measurement error varies as a function of the level of the true value of the construct. It is often the case with instruments that there is more measurement error at the extremes of its range than in the middle of its applicable range (i.e., its 'sweet spot'), so you can determine this, or perhaps determine what its most appropriate range is. You can also estimate the amount of measurement error in your instrument with the root mean squared error (the residual standard deviation); of course this assumes homoscedasticity, but you can also get estimates at differing points on $Y$ via fitting a smooth function, like a spline, to the residuals. Given these considerations, I'm guessing $X\text{ on }Y$ is better, but it certainly depends on what your goals are.
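A hedged R sketch (simulated instrument with a made-up error structure) of the kind of diagnostic described above: regress $X$ on $Y$ and inspect how the residual spread varies over the range of the gold-standard values.

set.seed(42)
y <- runif(300, 0, 10)                                   # gold-standard lab values
x <- 1.02 * y + rnorm(300, sd = 0.2 + 0.1 * abs(y - 5))  # instrument reading, noisier at the extremes
fit <- lm(x ~ y)                                         # X-on-Y: studies the instrument itself
plot(y, abs(resid(fit)), xlab = "true value", ylab = "|residual|")
lines(lowess(y, abs(resid(fit))), col = "red")           # measurement error as a function of the true value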
Is regression of x on y clearly better than y on x in this case?
Prediction and Forecasting

Yes, you are correct: when you view this as a problem of prediction, a Y-on-X regression will give you a model such that, given an instrument measurement, you can make an unbiased estimate of the accurate lab measurement, without doing the lab procedure. Put another way, if you are just interested in $E[Y|X]$ then you want Y-on-X regression. This may seem counter-intuitive because the error structure is not the "real" one. Assuming that the lab method is a gold-standard, error-free method, then we "know" that the true data-generating model is $X_i = \beta Y_i + \epsilon_i$, where $Y_i$ and $\epsilon_i$ are independent, the $\epsilon_i$ are identically distributed, and $E[\epsilon_i]=0$. We are interested in getting the best estimate of $E[Y_i|X_i]$. Because of our independence assumption we can rearrange the above: $Y_i = \frac{X_i - \epsilon_i}{\beta}$. Now, taking expectations given $X_i$ is where things get hairy: $E[Y_i|X_i] = \frac{1}{\beta} X_i - \frac{1}{\beta} E[\epsilon_i|X_i]$. The problem is the $E[\epsilon_i|X_i]$ term: is it equal to zero? It doesn't actually matter, because you can never see it, and we are only modelling linear terms (the argument extends to whatever terms you are modelling). Any dependence between $\epsilon$ and $X$ can simply be absorbed into the constant we are estimating. Explicitly, without loss of generality we can let $\epsilon_i = \gamma X_i + \eta_i$, where $E[\eta_i|X_i] = 0$ by definition, so that we now have $Y_i = \frac{1}{\beta} X_i - \frac{\gamma}{\beta} X_i - \frac{1}{\beta} \eta_i$, i.e. $Y_i = \frac{1-\gamma}{\beta} X_i - \frac{1}{\beta} \eta_i$, which satisfies all the requirements of OLS, since $\eta_i$ is now exogenous. It doesn't matter in the slightest that the error term also contains $\beta$, since neither $\beta$ nor the error scale is known anyway and both must be estimated. We can therefore simply replace those constants with new ones and use the normal approach: $Y_i = \alpha X_i + \eta_i$. Notice that we have NOT estimated the quantity $\beta$ that I originally wrote down; we have built the best model we can for using X as a proxy for Y.

Instrument Analysis

The person who set you this question clearly didn't want the answer above, since they say X-on-Y is the correct method, so why might they have wanted that? Most likely they were considering the task of understanding the instrument. As discussed in Vincent's answer, if you want to know how the instrument behaves, then X-on-Y is the way to go. Going back to the first equation above: $X_i = \beta Y_i + \epsilon_i$. The person setting the question could have been thinking of calibration. An instrument is said to be calibrated when its expectation equals the true value, that is, $E[X_i|Y_i] = Y_i$. Clearly, in order to calibrate $X$ you need to find $\beta$, and so to calibrate an instrument you need to do X-on-Y regression.

Shrinkage

Calibration is an intuitively sensible requirement of an instrument, but it can also cause confusion. Notice that even a well-calibrated instrument will not be showing you the expected value of $Y$! To get $E[Y|X]$ you still need to do the Y-on-X regression, even with a well-calibrated instrument. This estimate will generally look like a shrunk version of the instrument value (remember the $\gamma$ term that crept in). In particular, to get a really good estimate of $E[Y|X]$ you should include your prior knowledge of the distribution of $Y$. This then leads to concepts such as regression to the mean and empirical Bayes.
Example in R

One way to get a feel for what is going on here is to make some data and try the methods out. The code below compares X-on-Y with Y-on-X for prediction and calibration, and you can quickly see that X-on-Y is no good for the prediction model but is the correct procedure for calibration.

library(data.table)
library(ggplot2)

N = 100
beta = 0.7
c = 4.4

DT = data.table(Y = rt(N, 5), epsilon = rt(N, 8))
DT[, X := beta*Y + c + epsilon]

YonX = DT[, lm(Y ~ X)]   # Y = alpha_1 X + alpha_0 + eta
XonY = DT[, lm(X ~ Y)]   # X = beta_1 Y + beta_0 + epsilon

YonX.c = YonX$coef[1]    # c = alpha_0
YonX.m = YonX$coef[2]    # m = alpha_1

# For X on Y we need to rearrange after the fit.
# Fitted model: X = beta_1 Y + beta_0
# so           Y = X/beta_1 - beta_0/beta_1
XonY.c = -XonY$coef[1]/XonY$coef[2]   # c = -beta_0/beta_1
XonY.m = 1.0/XonY$coef[2]             # m = 1/beta_1

ggplot(DT, aes(x = X, y = Y)) + geom_point() +
  geom_abline(intercept = YonX.c, slope = YonX.m, color = "red") +
  geom_abline(intercept = XonY.c, slope = XonY.m, color = "blue")

# Generate a fresh sample
DT2 = data.table(Y = rt(N, 5), epsilon = rt(N, 8))
DT2[, X := beta*Y + c + epsilon]
DT2[, YonX.predict := YonX.c + YonX.m * X]
DT2[, XonY.predict := XonY.c + XonY.m * X]

cat("YonX sum of squares error for prediction: ", DT2[, sum((YonX.predict - Y)^2)])
cat("XonY sum of squares error for prediction: ", DT2[, sum((XonY.predict - Y)^2)])

# Generate lots of samples at the same Y
DT3 = data.table(Y = 4.0, epsilon = rt(N, 8))
DT3[, X := beta*Y + c + epsilon]
DT3[, YonX.predict := YonX.c + YonX.m * X]
DT3[, XonY.predict := XonY.c + XonY.m * X]

cat("Expected value of X at a given Y (calibrated using YonX) should be close to 4: ", DT3[, mean(YonX.predict)])
cat("Expected value of X at a given Y (calibrated using XonY) should be close to 4: ", DT3[, mean(XonY.predict)])

ggplot(DT3) + geom_density(aes(x = YonX.predict), fill = "red", alpha = 0.5) +
  geom_density(aes(x = XonY.predict), fill = "blue", alpha = 0.5) +
  geom_vline(xintercept = 4.0, size = 2) +
  ggtitle("Calibration at 4.0")

The two regression lines are plotted over the data. Then the sum of squares error for Y is measured for both fits on a new sample:

> cat("YonX sum of squares error for prediction: ", DT2[, sum((YonX.predict - Y)^2)])
YonX sum of squares error for prediction:  77.33448
> cat("XonY sum of squares error for prediction: ", DT2[, sum((XonY.predict - Y)^2)])
XonY sum of squares error for prediction:  183.0144

Alternatively, a sample can be generated at a fixed Y (in this case 4) and the average of those estimates taken. You can now see that the Y-on-X predictor is not well calibrated, having an expected value much lower than Y. The X-on-Y predictor is well calibrated, having an expected value close to Y.

> cat("Expected value of X at a given Y (calibrated using YonX) should be close to 4: ", DT3[, mean(YonX.predict)])
Expected value of X at a given Y (calibrated using YonX) should be close to 4:  1.305579
> cat("Expected value of X at a given Y (calibrated using XonY) should be close to 4: ", DT3[, mean(XonY.predict)])
Expected value of X at a given Y (calibrated using XonY) should be close to 4:  3.465205

The distribution of the two predictions can be seen in a density plot.
Is regression of x on y clearly better than y on x in this case?
It depends on your assumptions about the variance of X and the variance of Y for Ordinary Least Squares. If Y is the only source of variance and X has zero variance, then use X to estimate Y. If the assumptions are the other way around (X is the only source of variance and Y has zero variance), then use Y to estimate X. If both X and Y are assumed to have variance, then you may need to consider Total Least Squares. A good description of TLS was written up at this link. The paper is geared toward trading, but section 3 does a good job of describing TLS. Edit 1 (09/10/2013) =============================================== I originally assumed that this was some sort of homework problem, so I didn't get very specific about "the answer" to the OP's question. But, after reading other answers, it looks like it's OK to get a little more detailed. Quoting part of the OP's question: "....The levels are also measured using a very accurate laboratory procedure...." The above statement says that there are two measurements, one from the instrument and one from the lab procedure. The statement also implies that the variance for the laboratory procedure is low compared to the variance for the instrument. Another quote from the OP's question is: "....The laboratory procedure measure is denoted by y....." So, from the above two statements, Y has the lower variance. So, the least error-prone technique is to use Y to estimate X. The "answer provided" was correct.
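For a concrete feel of how the three options differ, here is a minimal sketch in base R. The sample size, error standard deviations and variable names are made up for illustration, and the TLS fit here is plain orthogonal regression (equal error variances in X and Y), computed from the first principal component rather than from any dedicated package:

set.seed(1)
n <- 200
truth <- runif(n, 0, 10)              # unobserved true level
y <- truth + rnorm(n, sd = 0.5)       # "lab" measurement
x <- truth + rnorm(n, sd = 0.5)       # "instrument" measurement (same error sd, so orthogonal TLS applies)

# OLS in both directions, both expressed as lines in the (x, y) plane
yx <- coef(lm(y ~ x))                                   # Y-on-X: intercept, slope
xy <- coef(lm(x ~ y)); xy <- c(-xy[1]/xy[2], 1/xy[2])   # X-on-Y, rearranged to y = a + b*x

# TLS (orthogonal regression) via the first principal component of (x, y)
pc <- prcomp(cbind(x, y))
b1 <- pc$rotation[2, 1] / pc$rotation[1, 1]   # TLS slope
b0 <- mean(y) - b1 * mean(x)                  # TLS intercept

rbind(YonX = yx, XonY.rearranged = xy, TLS = c(b0, b1))
# The Y-on-X slope is attenuated towards zero, the rearranged X-on-Y slope
# overshoots, and the TLS slope sits close to the true value of 1.

Deming regression generalizes this to unequal, known error-variance ratios, which is closer to the instrument-versus-lab situation in the question.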
Is regression of x on y clearly better than y on x in this case?
It depends on your assumptions about the variance of X and the variance of Y for Ordinary Least Squares. If Y has the only source of variance and X has zero variance, then use X to estimate Y. If
Is regression of x on y clearly better than y on x in this case? It depends on your assumptions about the variance of X and the variance of Y for Ordinary Least Squares. If Y has the only source of variance and X has zero variance, then use X to estimate Y. If the assumptions are the other way around (X has the only variance and Y has zero variance), then use Y to estimate X. If both X and Y are assumed to have variance, then you may need to consider Total Least Squares. A good description of TLS was written up at this link. The paper is geared toward trading, but section 3 does a good job of describing TLS. Edit 1 (09/10/2013) =============================================== I originally assumed that this was some sort of homework problem, so I didn't get real specific about "the answer" to the OP's question. But, after reading other answers, it looks like it's OK to get a little more detailed. Quoting part of the OP's question: "....The levels are also measured using a very accurate laboratory procedure...." The above statement says that there are two measurements, one from the instrument and one from the lab procedure. The statement also implies that the variance for the laboratory procedure is low compared to the variance for the instrument. Another quote from the OP's question is: "....The laboratory procedure measure is denoted by y....." So, from the above two statements, Y has the lower variance. So, the least error-prone technique is to use Y to estimate X. The "answer provided" was correct.
Is regression of x on y clearly better than y on x in this case? It depends on your assumptions about the variance of X and the variance of Y for Ordinary Least Squares. If Y has the only source of variance and X has zero variance, then use X to estimate Y. If
27,515
Definition of dispersion parameter for quasipoisson family
The dispersion parameter in the quasi-Poisson model Let us first see how the dispersion parameter is calculated in the model using quasi-Poisson likelihood. One assumption in Poisson regression is that the conditional variance and the conditional mean of the response $Y$ are the same: $$ V(Y_{i}|\eta_{i})=E(Y_{i}|\eta_{i})=\mu_{i} $$ where $\eta_{i}$ is the linear predictor. Sometimes we observe that the variance is greater than the mean, which is called overdispersion. The quasi-Poisson likelihood model is a simple remedy for overdispersed count data because it introduces a dispersion parameter ($\phi$) into the Poisson model, so that the conditional variance of the response is now a linear function of the mean: $$ V(Y_{i}|\eta_{i})=\phi\mu_{i} $$ If $\phi>1$, the conditional variance increases more rapidly than its mean. Now, $\phi$ is estimated as: $$ \widehat{\phi}=\frac{1}{n-k}\sum\frac{(Y_{i}-\hat{\mu_{i}})^2}{\hat{\mu_{i}}} $$ where $n$ is the sample size, $k$ is the number of estimated parameters (including the intercept) and $\widehat{\mu_{i}}=g^{-1}(\widehat{\eta_{i}})$ is the fitted expectation of $Y_{i}$ ($g^{-1}$ is the inverse link function). Note that the formulation above is simply the Pearson $\chi^{2}$ divided by the residual degrees of freedom: $\widehat{\phi}=\chi^{2}/df$. Let's estimate the dispersion parameter in your example in R: n <- 20 k <- 2 1/(n-k)*sum((C.OD-fitted(glm.fit.no.OD))^2/fitted(glm.fit.no.OD)) [1] 1.4703 # Via Pearson residuals sum(residuals(glm.fit.no.OD, type="pearson")^2)/df.residual(glm.fit.no.OD) [1] 1.4703 The estimated dispersion parameter is $1.47$. This is the same as given by the model output from the quasi-Poisson model: summary(glm.fit.with.OD) [...] (Dispersion parameter for quasipoisson family taken to be 1.470301) [...] The dispersion parameter in the negative binomial model There are several approaches to model count data with overdispersion. One very popular approach is negative binomial regression. The conditional variance-mean relationship in the negative binomial model is: $$ V(Y_{i}|\eta_{i})=\mu_{i}+\mu_{i}^{2}/\phi=\mu_{i}(1+\mu_{i}/\phi) $$ where the second term provides the overdispersion; smaller values of $\phi$ denote stronger overdispersion. As $\phi\rightarrow\infty$, the variance approaches the mean and the distribution approaches the Poisson distribution as the second term gets very small. Sometimes, $1/\phi$ is used instead and if $1/\phi\rightarrow 0$, the distribution approaches the Poisson. In contrast to the quasi-Poisson model, the conditional variance is now a quadratic function of the mean (see this paper for more information). Note that the dispersion parameter is the multiplicative factor $1+\mu_{i}/\phi$, which depends on $\mu_{i}$ (in contrast to the quasi-Poisson model). We can fit a negative binomial regression using glm.nb from the MASS package: library(MASS) glm.negbin <- glm.nb(C.OD ~ x) summary(glm.negbin) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 0.4055 0.2847 1.424 0.154360 xarable 1.2622 0.3381 3.734 0.000189 *** (Dispersion parameter for Negative Binomial(6.9569) family taken to be 1) Theta: 6.96 Std. Err.: 6.74 The estimate for $\phi$ is $6.96$ (it's called theta in the output). The reciprocal of $\phi$ is sometimes used and is $1/6.96\approx0.144$ in this case. Because $1/\phi>0$ (i.e. $\phi$ is finite), we have overdispersion.
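To see how differently the two fitted models spread the extra variance across the two groups, one can compare the implied conditional variances directly. The following is only an illustrative sketch of the two variance functions; it assumes the objects glm.fit.with.OD (the quasi-Poisson fit) and glm.negbin from above are still in the workspace:

phi <- summary(glm.fit.with.OD)$dispersion   # about 1.47
theta <- glm.negbin$theta                    # about 6.96

mu <- tapply(fitted(glm.negbin), x, mean)    # fitted mean per group

cbind(mean = mu,
      quasi.poisson = phi * mu,              # variance linear in the mean
      neg.binomial  = mu + mu^2 / theta)     # variance quadratic in the mean

Because the negative binomial variance grows quadratically, it attributes relatively more extra variance to the high-mean ("arable") group than the quasi-Poisson fit does.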
Other methods to model overdispersed count data I will outline three approaches here which can be found in the book Bayesian Modeling Using WinBUGS by Ioannis Ntzoufras (section 8.31, pages 283-286; section 9.2.3, pages 315-318). Estimating overdispersion using a negative binomial model Our model is: $$ \begin{align} Y_{i} &\sim \text{NB}(\pi_{i},r_{i}) \\ \pi_{i} &= \frac{r_{i}}{r_{i}+\lambda_{i}} \\ \log(\lambda_{i}) &= \beta_{0} + \sum_{j=1}^{p}\beta_{j}X_{ij} \end{align} $$ We have two groups, so the two lambdas are $\lambda_{1}=e^{\beta_{0}}, \lambda_{2}=e^{\beta_{0}+\beta_{1}}$. The dispersion index is given by $\text{DI}=1+\lambda/r$. If $\text{DI}>0$ we have overdispersion. Here is our WinBUGS/JAGS model (I used JAGS to sample from the posterior): library(rjags) library(R2jags) sink("Negbin_model.txt") cat(" model { for (i in 1:n) { y[i] ~ dnegbin(p.ind[i], r.ind[i]) p.ind[i] <- r.ind[i]/(r.ind[i] + lambda.ind[i]) log(lambda.ind[i]) <- beta[1] + beta[2]*x[i] r.ind[i] <- r[ x[i] + 1 ] } lambda[1] <- exp(beta[1]) lambda[2] <- exp(beta[1] + beta[2]) beta1 <- exp(beta[2]) for(j in 1:2) { logr.cont[j] ~ dunif(0, 10) log(r.cont[j]) <- logr.cont[j] r[j] <- round(r.cont[j]) #r[j] ~dgamma(0.001, 0.001) beta[j] ~ dnorm(0.0, 0.0001) DI[j] <- (1 + lambda[j])/r[j] vari[j] <- lambda[j]*DI[j] p[j] <- r[j]/(r[j]+lambda[j]) } } ",fill=TRUE) sink() # Bundle data win.data <- list(y = C.OD, x = as.numeric(x)-1, n = length(x)) # Inits function inits <- function(){ list(beta=rlnorm(2), logr.cont=runif(2, 0,10))} params <- c("beta", "lambda", "r", "DI", "p", "vari") # MCMC settings nc <- 3 # Number of chains ni <- 50000 # Number of draws from posterior per chain nb <- 10000 # Number of draws to discard as burn-in nt <- 1 # Thinning rate # Start Gibbs sampling #out <- bugs(data=win.data, inits=inits, parameters.to.save=params, # model.file="Negbin_model.txt", n.thin=nt, n.chains=nc, # n.burnin=nb, n.iter=ni, debug = TRUE, program="OpenBUGS") #print(out, dig = 3) out <- jags( data = win.data, parameters.to.save = params, model.file = "Negbin_model.txt", n.chains = nc, n.iter = ni, n.burnin = nb, n.thin=nt, inits=inits, progress.bar="text") #out <- update(out, n.iter=50000) out The output is: Inference for Bugs model at "Negbin_model.txt", fit using jags, 3 chains, each with 50000 iterations (first 10000 discarded) n.sims = 120000 iterations saved mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff DI[1] 0.141 0.347 0.000 0.001 0.010 0.098 1.166 1.001 59000 DI[2] 1.025 1.387 0.000 0.032 0.516 1.520 4.486 1.001 67000 beta[1] 0.378 0.269 -0.176 0.203 0.386 0.562 0.880 1.001 35000 beta[2] 1.287 0.326 0.664 1.066 1.280 1.502 1.950 1.001 22000 lambda[1] 1.512 0.404 0.839 1.225 1.471 1.755 2.412 1.001 35000 lambda[2] 5.381 1.035 3.657 4.704 5.285 5.933 7.700 1.001 27000 p[1] 0.942 0.111 0.585 0.945 0.994 0.999 1.000 1.001 120000 p[2] 0.682 0.271 0.205 0.441 0.699 0.973 1.000 1.001 33000 r[1] 2452.090 4589.758 2.000 25.000 240.000 2326.000 17602.050 1.001 59000 r[2] 1143.373 3351.097 2.000 4.000 12.000 193.000 13614.000 1.001 80000 vari[1] 0.232 0.696 0.000 0.002 0.015 0.143 1.923 1.001 57000 vari[2] 5.964 10.345 0.002 0.170 2.629 7.883 30.075 1.001 56000 deviance 83.371 2.319 80.238 81.701 82.964 84.510 89.135 1.001 120000 The posterior median dispersion index for the group "arable" (DI[2]) is larger than zero indicating overdispersion. On the other hand, the dispersion index for the group "grassland" (DI[1]) is only slightly larger than zero. 
Let's look at the posterior density of the dispersion index for the group "arable" and calculate the 95% Highest Posterior Density intervals (HDP) for the dispersion indices: library(ggplot2) library(runjags) jagsfit.matrix <- rbind(as.matrix(as.mcmc(out)[[1]]), as.matrix(as.mcmc(out)[[2]]), as.matrix(as.mcmc(out)[[3]])) name <- "DI[2]" vect <- jagsfit.matrix[, name] vect.plot <- vect[vect<=20] mcmc.combined <- combine.mcmc(as.mcmc(out)) hpd.ints <- HPDinterval(mcmc.combined, prob=0.95) hpd.ints #hdr(mcmc.combined[,"DI"], prob=95, h=hdrbw(mcmc.combined[,"DI"], gridsize=1000000, HDRlevel=0.95), nn=5000) plot.frame <- data.frame(dispersion=vect.plot) ggplot(plot.frame, aes(x=vect.plot)) + geom_density(alpha=0.5, fill="#1B4F97", color="#1B4F97") + geom_vline(xintercept = c(0, 3.561465e+00), alpha=0.6, size=1) + xlim(c(0,20)) + ylab("Density") + xlab("Dispersion index") + ggtitle("Posterior distribution of the dispersion index for the group \"arable\"") + theme(axis.title.y =element_text(vjust=0.4, size=20, angle=90)) + theme(axis.title.x =element_text(vjust=0, size=20, angle=0)) + theme(axis.text.x =element_text(size=15, colour = "black")) + theme(axis.text.y =element_text(size=17, colour = "black")) + theme(panel.background = element_rect(fill = "grey85", colour = NA), panel.grid.major = element_line(colour = "white"), panel.grid.minor = element_line(colour = "grey90", size = 0.25)) The 95% HDP for the dispersion parameter for the group "arable" ranges from $0$ to $3.56$ (marked by the vertical grey lines in the graphic above). Estimating overdispersion using a Poisson-log-normal model First, let's define our model: $$ \begin{align} Y_{i} &\sim \text{Poisson}(\lambda_{i}) \\ \log(\lambda_{i}) &= \mu_{i}+b_{i} \\ \mu_{i} &= \beta_{0} + \sum_{j=1}^{p}\beta_{j}X_{ij} \\ b_{i} &\sim \mathcal{N}(0, \sigma^{2}) \end{align} $$ The mean and the variance then are $$ \begin{align} E(Y|\lambda,\sigma_{b}^{2}) &= \lambda e^{\sigma_{b}^{2}/2} \\ V(Y|\lambda, \sigma_{b}^{2}) &= \lambda e^{\sigma_{b}^{2}/2} + \lambda^{2}e^{2\sigma_{b}^{2}} - \lambda^{2}e^{\sigma_{b}^{2}} \end{align} $$ So we simply add $b_{i}$ to the linear predictor to take the overdispersion into account. 
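The mean and variance formulas of the Poisson-log-normal model are easy to check by simulation. The following sketch uses arbitrary values of $\lambda$ and $\sigma_{b}$ and is not part of the model fit:

set.seed(42)
lambda <- 5; sigma.b <- 0.5
b <- rnorm(1e6, 0, sigma.b)                 # random effects
y <- rpois(1e6, lambda * exp(b))            # Poisson-log-normal draws

c(emp.mean = mean(y), theor.mean = lambda * exp(sigma.b^2 / 2))
c(emp.var = var(y),
  theor.var = lambda * exp(sigma.b^2 / 2) +
              lambda^2 * exp(2 * sigma.b^2) - lambda^2 * exp(sigma.b^2))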
We can build the dispersion index (DI) into the model and estimate it for each group separately: sink("Poisson.OD.t.test.txt") cat(" model { # Priors alpha ~ dnorm(0,0.001) beta ~ dnorm(0,0.001) sigma ~ dunif(0, 10) sigma2 <- sigma*sigma tau <- 1 / sigma2 maybe_overdisp <- mean(exp_eps[]) kappa[1] <- exp(alpha) kappa[2] <- exp(alpha + beta) mean.x[1] <- exp(alpha + 0.5*sigma2) mean.x[2] <- exp(alpha + beta + 0.5*sigma2) vari[1] <- kappa[1]*exp(sigma2/2)+kappa[1]*kappa[1]*exp(2*sigma2) - kappa[1]*kappa[1]*exp(sigma2) vari[2] <- kappa[2]*exp(sigma2/2)+kappa[2]*kappa[2]*exp(2*sigma2) - kappa[2]*kappa[2]*exp(sigma2) DI[1] <- vari[1]/mean.x[1] DI[2] <- vari[2]/mean.x[2] # Likelihood for (i in 1:n) { C.OD[i] ~ dpois(lambda[i]) log(lambda[i]) <- alpha + beta*x[i] + eps[i] eps[i] ~ dnorm(0, tau) exp_eps[i] <- exp(eps[i]) #di.index.ind[i] <- 1 + exp(eps[i])*(es-1)*sqrt(es) } } ",fill=TRUE) sink() # Bundle data win.data <- list(C.OD = C.OD, x = as.numeric(x)-1, n = length(x)) # Inits function inits <- function(){ list(alpha=rlnorm(1), beta=rlnorm(1), sigma=rlnorm(1))} # Parameters to estimate params <- c("alpha", "beta", "sigma", "sigma2", "maybe_overdisp", "DI", "mean.x", "vari", "kappa") # MCMC settings nc <- 3 # Number of chains ni <- 50000 # Number of draws from posterior per chain nb <- 10000 # Number of draws to discard as burn-in nt <- 1 # Thinning rate # Start Gibbs sampling out2 <- jags( data = win.data, parameters.to.save = params, model.file = "Poisson.OD.t.test.txt", n.chains = nc, n.iter = ni, n.burnin = nb, n.thin=nt, inits=inits, progress.bar="text") The output is Inference for Bugs model at "Poisson.OD.t.test.txt", fit using jags, 3 chains, each with 50000 iterations (first 10000 discarded) n.sims = 120000 iterations saved mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff DI[1] 1.746 16.499 1.004 1.132 1.312 1.666 4.054 1.001 100000 DI[2] 3.573 22.604 1.013 1.486 2.133 3.404 12.319 1.001 8500 alpha 0.273 0.324 -0.416 0.069 0.293 0.496 0.857 1.001 4800 beta 1.294 0.384 0.559 1.042 1.286 1.538 2.076 1.001 7400 kappa[1] 1.382 0.436 0.660 1.072 1.340 1.642 2.355 1.001 4800 kappa[2] 4.911 1.076 2.927 4.199 4.868 5.560 7.167 1.001 23000 maybe_overdisp 1.119 0.176 0.900 1.006 1.075 1.187 1.570 1.001 49000 mean.x[1] 1.581 0.565 0.794 1.220 1.506 1.847 2.802 1.001 7000 mean.x[2] 5.632 1.505 3.591 4.728 5.413 6.225 9.022 1.001 31000 sigma 0.461 0.232 0.050 0.300 0.442 0.601 0.970 1.009 1300 sigma2 0.266 0.260 0.002 0.090 0.195 0.361 0.940 1.009 1300 vari[1] 6.195 846.592 0.925 1.501 1.986 2.844 9.543 1.001 45000 vari[2] 32.314 1013.170 4.830 7.456 11.123 19.352 98.875 1.001 11000 deviance 74.771 5.949 64.349 70.338 74.364 78.967 86.505 1.001 5100 First note the posterior medians of the groups means.x are close to the observed means of the data ($1.51$ vs. $1.5$ and $5.41$ vs. $5.3$). The posterior variances vari for the groups are also close to the observed variances ($1.99$ vs. $1.39$ and $11.12$ vs. $10.68$). Importantly, the posterior median of the estimate $\hat{\beta_{1}}$ (beta) ($1.286$) is very close to the estimate calculated by glm using family="quasipoisson" which was $1.262$. The posterior mean and median of the dispersion index (DI) is $1.75$ and $1.31$ for the group "grassland" and $3.57$ and $2.13$ for the group "arable". It seems that the data for the group "arable" is more overdispersed than the group "grassland". 
The dispersion parameter estimated by glm with quasi-Poisson likelihood was around $1.47$ which is in between the posterior medians of the two dispersion indices, so our estimations look reasonable. Let's look at the posterior density of the dispersion index for the group "arable" and calculate the 95% Highest Posterior Density intervals (HDP) for the dispersion indices: library(ggplot2) library(runjags) jagsfit.matrix <- rbind(as.matrix(as.mcmc(out2)[[1]]), as.matrix(as.mcmc(out2)[[2]]), as.matrix(as.mcmc(out2)[[3]])) name <- "DI[2]" vect <- jagsfit.matrix[, name] vect.plot <- vect[vect<=20] mcmc.combined <- combine.mcmc(as.mcmc(out2)) hpd.ints <- HPDinterval(mcmc.combined, prob=0.95) hpd.ints lower upper DI[1] 1.000000e+00 3.0691670 DI[2] 1.000001e+00 8.5232449 alpha -3.729040e-01 0.8911411 beta 5.598711e-01 2.0766436 deviance 6.397199e+01 86.0493853 kappa[1] 5.908833e-01 2.2448459 kappa[2] 2.821622e+00 7.0389753 maybe_overdisp 8.565536e-01 1.4860521 mean.x[1] 6.856225e-01 2.5876715 mean.x[2] 3.300125e+00 8.3619700 sigma 4.044946e-04 0.8686939 sigma2 9.929906e-08 0.7545791 vari[1] 6.201473e-01 6.6375401 vari[2] 3.416068e+00 61.4973745 attr(,"Probability") [1] 0.95 plot.frame <- data.frame(dispersion=vect.plot) ggplot(plot.frame, aes(x=vect.plot)) + geom_density(alpha=0.5, fill="#1B4F97", color="#1B4F97") + geom_vline(xintercept = c(1, 8.5232449), alpha=0.6, size=1) + xlim(c(0,20)) + ylab("Density") + xlab("Dispersion index") + ggtitle("Posterior distribution of the dispersion index for the group \"arable\"") + theme(axis.title.y =element_text(vjust=0.4, size=20, angle=90)) + theme(axis.title.x =element_text(vjust=0, size=20, angle=0)) + theme(axis.text.x =element_text(size=15, colour = "black")) + theme(axis.text.y =element_text(size=17, colour = "black")) + theme(panel.background = element_rect(fill = "grey85", colour = NA), panel.grid.major = element_line(colour = "white"), panel.grid.minor = element_line(colour = "grey90", size = 0.25)) The 95% HDP for the dispersion parameter for the group "arable" ranges from $1$ to $8.52$ (marked by the vertical grey lines in the graphic above). It is important to note that we used a uniform prior for the standard deviation of $b_{i}$. There are other possibilities and the posterior distribution of the dispersion index can vary depending on the prior. Other priors include but are not limited to: uniform on the variance ($\sigma^{2}$), half-normal prior on $\sigma$ or $\sigma^{2}$, half-Cauchy on $\sigma$ and others. 
Estimating overdispersion using a Poisson-gamma model We can also model the data using a Poisson-gamma model: $$ \begin{align} Y_{i} &\sim \text{Poisson}(\lambda_{i}u_{i}) \\ u_{i} &\sim \text{Gamma}(r_{i}, r_{i}) \end{align} $$ The WinBUGS model (or OpenBUGS, JAGS) is as follows: sink("gamma_mix.txt") cat(" model{ for(i in 1:n){ y[i] ~ dpois(mu.ind[i]) mu.ind[i] <- mu[i]*u[i] log(mu[i]) <- beta[1]+beta[2]*x[i] u[i] ~ dgamma(r[x[i]+1], r[x[i]+1]) } mean.u <- mean(u[]) lambda[1] <- exp(beta[1]) lambda[2] <- exp(beta[1] + beta[2]) for (j in 1:2){ r[j] ~ dgamma(0.001, 0.001) beta[j] ~ dnorm(0.0, 0.0001) DI[j] <- (1+lambda[j]/r[j]) vari[j] <- lambda[j]*DI[j] p[j] <- r[j]/(r[j]+lambda[j]) } } ",fill=TRUE) sink() # Bundle data win.data <- list(y = C.OD, x = as.numeric(x)-1, n = length(x)) # Inits function inits <- function(){ list(beta=rlnorm(2), r=rlnorm(2))} # Parameters to estimate params <- c("beta", "lambda", "r", "DI", "mean.x", "vari", "tau", "s", "s2", "mean.u") # MCMC settings nc <- 3 # Number of chains ni <- 50000 # Number of draws from posterior per chain nb <- 10000 # Number of draws to discard as burn-in nt <- 1 # Thinning rate out <- jags( data = win.data, parameters.to.save = params, model.file = "gamma_mix.txt", n.chains = nc, n.iter = ni, n.burnin = nb, n.thin=nt, inits=inits, progress.bar="text") The output is Inference for Bugs model at "gamma_mix.txt", fit using jags, 3 chains, each with 50000 iterations (first 10000 discarded) n.sims = 120000 iterations saved mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff DI[1] 1.180 0.492 1.001 1.008 1.038 1.163 2.196 1.002 5600 DI[2] 2.280 1.734 1.017 1.248 1.774 2.665 6.580 1.001 8500 beta[1] 0.375 0.286 -0.206 0.189 0.383 0.567 0.909 1.002 3100 beta[2] 1.292 0.352 0.612 1.058 1.287 1.523 1.995 1.002 3500 lambda[1] 1.515 0.445 0.814 1.209 1.467 1.763 2.481 1.002 3100 lambda[2] 5.414 1.180 3.524 4.649 5.287 6.009 8.087 1.001 12000 mean.u 1.001 0.093 0.820 0.952 0.998 1.045 1.207 1.001 34000 r[1] 184.429 373.385 1.358 8.872 37.503 172.345 1378.710 1.002 2100 r[2] 37.280 103.862 1.076 3.162 6.733 20.971 303.187 1.001 28000 vari[1] 1.861 2.106 0.863 1.294 1.600 1.998 4.259 1.001 27000 vari[2] 13.170 17.311 4.585 6.442 9.057 14.297 45.626 1.001 6500 deviance 76.089 5.763 65.979 71.679 75.802 80.428 87.121 1.001 16000 Again, the posterior means (lambda) and variances (vari) are very close to the observed ones. The estimate for the coefficient beta[2] is again practically identical to the estimates we've got using the Poisson-log-normal approach (i.e. $\approx 1.29$). The dispersion indices for the two groups are about $1.04$ for "grassland" and $1.77$ for "arable". These are very close to the observed overdispersion which are $0.926$ for "grassland" and $2.01$ for "arable", respectively. 
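Since the gamma mixing distribution above has mean one, the marginal distribution of $Y_{i}$ is exactly negative binomial with size $r$ and mean $\lambda$. A quick simulation sketch (with arbitrary parameter values, not taken from the data) makes this concrete:

set.seed(1)
r <- 5; lambda <- 4; N <- 1e6
u <- rgamma(N, shape = r, rate = r)          # E(u) = 1
y.mix <- rpois(N, lambda * u)                # Poisson-gamma mixture
y.nb <- rnbinom(N, size = r, mu = lambda)    # negative binomial directly

rbind(mixture = c(mean = mean(y.mix), var = var(y.mix)),
      negbin  = c(mean = mean(y.nb),  var = var(y.nb)),
      theory  = c(mean = lambda,      var = lambda * (1 + lambda / r)))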
The HDPs and the posterior density of the dispersion index for the group "arable" is jagsfit.matrix <- rbind(as.matrix(as.mcmc(out)[[1]]), as.matrix(as.mcmc(out)[[2]]), as.matrix(as.mcmc(out)[[3]])) name <- "DI[2]" vect <- jagsfit.matrix[, name] vect.plot <- vect[vect<=20] mcmc.combined <- combine.mcmc(as.mcmc(out)) hpd.ints <- HPDinterval(mcmc.combined, prob=0.95) hpd.ints lower upper DI[1] 1.0002195 1.7847379 DI[2] 1.0022835 5.1658954 beta[1] -0.1984061 0.9146267 beta[2] 0.5995819 1.9807803 deviance 65.7749045 86.8262578 lambda[1] 0.7592323 2.3828823 lambda[2] 3.2764333 7.6725830 mean.u 0.8060325 1.1907674 r[1] 0.1508372 900.3718394 r[2] 0.2372253 193.6838708 vari[1] 0.6524941 3.2896264 vari[2] 3.5796354 33.4126294 attr(,"Probability") [1] 0.95 plot.frame <- data.frame(dispersion=vect.plot) ggplot(plot.frame, aes(x=vect.plot)) + geom_density(alpha=0.5, fill="#1B4F97", color="#1B4F97") + geom_vline(xintercept = c(1, 5.1658954), alpha=0.6, size=1) + xlim(c(0,20)) + ylab("Density") + xlab("Dispersion index") + ggtitle("Posterior distribution of the dispersion index for the group \"arable\"") + theme(axis.title.y =element_text(vjust=0.4, size=20, angle=90)) + theme(axis.title.x =element_text(vjust=0, size=20, angle=0)) + theme(axis.text.x =element_text(size=15, colour = "black")) + theme(axis.text.y =element_text(size=17, colour = "black")) + theme(panel.background = element_rect(fill = "grey85", colour = NA), panel.grid.major = element_line(colour = "white"), panel.grid.minor = element_line(colour = "grey90", size = 0.25)) The 95% HDP for the dispersion parameter for the group "arable" ranges from $1$ to $5.17$ (marked by the vertical grey lines in the graphic above). The interval is smaller than the interval obtained by the Poisson-log-normal approach, which was ranging from $1$ to $8.52$. 
Calculate dispersion parameter as in the quasi-Poisson model The dispersion parameter in the quasi-Poisson GLM is estimated as follows: $$ \widehat{\mathrm{DI}}=\frac{1}{n-k}\sum_{i}^{n}r_{P,i}^{2} $$ where $n$ is the sample size, $k$ the number of estimated parameters and $r_{P,i}$ are the Pearson residuals: $$ r_{P}=\frac{y-\mu}{\sqrt{\mu}} $$ The dispersion index can be estimated using normal Poisson regression in WinBUGS: sink("poisson_dispersion.txt") cat(" model{ for(i in 1:n){ y[i] ~ dpois(mu[i]) log(mu[i]) <- beta[1] + beta[2]*x[i] fitted.y[i] <- exp(beta[1]+beta[2]*x[i]) } DI.index <- 1/(n-2)*sum(pow((y[]-fitted.y[]),2)/fitted.y[]) for (j in 1:2){ beta[j] ~ dnorm(0.0, 0.0001) } } ",fill=TRUE) sink() # Bundle data win.data <- list(y = C.OD, x = as.numeric(x)-1, n = length(x)) # Inits function inits <- function(){ list(beta=rlnorm(2))} # Parameters to estimate params <- c("beta", "DI.index") # MCMC settings nc <- 3 # Number of chains ni <- 50000 # Number of draws from posterior per chain nb <- 10000 # Number of draws to discard as burn-in nt <- 1 # Thinning rate out <- jags( data = win.data, parameters.to.save = params, model.file = "poisson_dispersion.txt", n.chains = nc, n.iter = ni, n.burnin = nb, n.thin=nt, inits=inits, progress.bar="text") out mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff DI.index 1.637 0.272 1.354 1.441 1.560 1.750 2.359 1.001 4500 beta[1] 0.375 0.261 -0.170 0.208 0.388 0.556 0.855 1.002 2700 beta[2] 1.282 0.294 0.729 1.080 1.273 1.474 1.886 1.001 4300 deviance 84.491 2.068 82.536 83.054 83.869 85.265 89.928 1.001 44000 The posterior median of the dispersion index is 1.56, the 95%-HDI ranges from $1.345$ to $2.172$, and the value of $1.47$ as estimated by glm is well within this interval. Here's the density plot of the posterior distribution of the dispersion index:
Definition of dispersion parameter for quasipoisson family
The dispersion parameter in the quasi-Poisson model Let us first see how the dispersion parameter is calculated in the model using quasi-Poisson likelihood. One assumption in Poisson regression is tha
Definition of dispersion parameter for quasipoisson family The dispersion parameter in the quasi-Poisson model Let us first see how the dispersion parameter is calculated in the model using quasi-Poisson likelihood. One assumption in Poisson regression is that the conditional variance and the conditional mean of the response $Y$ are the same: $$ V(Y_{i}|\eta_{i})=E(Y_{i}|\eta_{i})=\mu_{i} $$ where $\eta_{i}$ is the linear predictor. Sometimes we observe that the variance is greater than the mean which is called overdispersion. The quasi-Poisson likelihood model is a simple remedy for overdispersed count data because it introduces a dispersion parameter ($\phi$) into the Poisson model, so that the conditional variance of the response is now a linear function of the mean: $$ V(Y_{i}|\eta_{i})=\phi\mu_{i} $$ If $\phi>1$, the conditional variance increases more rapidly than its mean. Now, $\phi$ is estimated as: $$ \widehat{\phi}=\frac{1}{n-k}\sum\frac{(Y_{i}-\hat{\mu_{i}})^2}{\hat{\mu_{i}}} $$ where $n$ is the sample size, $k$ is the number of estimated parameters (including the intercept) and $\widehat{\mu_{i}}=g^{-1}(\widehat{\eta_{i}})$ is the fitted expectation of $Y_{i}$ ($g^{-1}$ is the inverse link function). Note that the formulation above is simply the Pearson $\chi^{2}$ divided by the residual degrees of freedom: $\widehat{\phi}=\chi^{2}/df$. Let's estimate the dispersion parameter in your example in R: n <- 20 k <- 2 1/(n-k)*sum((C.OD-fitted(glm.fit.no.OD))^2/fitted(glm.fit.no.OD)) [1] 1.4703 # Via Pearson residuals sum(residuals(glm.fit.no.OD, type="pearson")^2)/df.residual(glm.fit.no.OD) [1] 1.4703 The estimated dispersion parameter is $1.47$. This is the same as given by the model output from the quasi-Poisson model: summary(glm.fit.with.OD) [...] (Dispersion parameter for quasipoisson family taken to be 1.470301) [...] The dispersion parameter in the negative binomial model There are several approaches to model count data with overdispersion. One very popular approach negative binomial regression. The conditional variance-mean relationship in the negative binomial model is: $$ V(Y_{i}|\eta_{i})=\mu_{i}+\mu_{i}^{2}/\phi=\mu_{i}(1+\mu_{i}/\phi) $$ where the second term provides the overdispersion where smaller $\phi$s denote stronger overdispersion. As $\phi\rightarrow\infty$, the variance approaches the mean and the distribution approaches the Poisson distribution as the second term gets very small. Sometimes, $1/\phi$ is used instead and if $1/\phi\rightarrow 0$, the distribution approaches the Poisson. In contrast to the quasi-Poisson model, the conditional variance is now a quadratic function of the mean (see this paper for more information). Note that the dispersion parameter is the multiplicative factor $1+\mu_{i}\phi$, which depends on $\mu_{i}$ (in contrast to the quasi-Poisson model). We can fit a negative binomial regression using glm.nb from the MASS package: library(MASS) glm.negbin <- glm.nb(C.OD ~ x) summary(glm.negbin) Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 0.4055 0.2847 1.424 0.154360 xarable 1.2622 0.3381 3.734 0.000189 *** (Dispersion parameter for Negative Binomial(6.9569) family taken to be 1) Theta: 6.96 Std. Err.: 6.74 The estimate for $\phi$ is $6.96$ (it's called theta in the output). The reciprocal of $\phi$ is sometimes used and is $1/6.96\approx0.144$ in this case. Because $1/\phi>0$ we have overdispersion. 
Other methods to model overdispersed count data I will outline three approaches here which can be found in the book Bayesian Modeling Using WinBUGS by Ioannis Ntzoufras (section 8.31, pages 283-286; section 9.2.3, pages 315-318). Estimating overdispersion using a negative binomial model Our model is: $$ \begin{align} Y_{i} &\sim \text{NB}(\pi_{i},r_{i}) \\ \pi_{i} &= \frac{r_{i}}{r_{i}+\lambda_{i}} \\ \log(\lambda_{i}) &= \beta_{0} + \sum_{j=1}^{p}\beta_{j}X_{ij} \end{align} $$ We have two groups, so the two lambdas are $\lambda_{1}=e^{\beta_{0}}, \lambda_{2}=e^{\beta_{0}+\beta_{1}}$. The dispersion index is given by $\text{DI}=1+\lambda/r$. If $\text{DI}>0$ we have overdispersion. Here is our WinBUGS/JAGS model (I used JAGS to sample from the posterior): library(rjags) library(R2jags) sink("Negbin_model.txt") cat(" model { for (i in 1:n) { y[i] ~ dnegbin(p.ind[i], r.ind[i]) p.ind[i] <- r.ind[i]/(r.ind[i] + lambda.ind[i]) log(lambda.ind[i]) <- beta[1] + beta[2]*x[i] r.ind[i] <- r[ x[i] + 1 ] } lambda[1] <- exp(beta[1]) lambda[2] <- exp(beta[1] + beta[2]) beta1 <- exp(beta[2]) for(j in 1:2) { logr.cont[j] ~ dunif(0, 10) log(r.cont[j]) <- logr.cont[j] r[j] <- round(r.cont[j]) #r[j] ~dgamma(0.001, 0.001) beta[j] ~ dnorm(0.0, 0.0001) DI[j] <- (1 + lambda[j])/r[j] vari[j] <- lambda[j]*DI[j] p[j] <- r[j]/(r[j]+lambda[j]) } } ",fill=TRUE) sink() # Bundle data win.data <- list(y = C.OD, x = as.numeric(x)-1, n = length(x)) # Inits function inits <- function(){ list(beta=rlnorm(2), logr.cont=runif(2, 0,10))} params <- c("beta", "lambda", "r", "DI", "p", "vari") # MCMC settings nc <- 3 # Number of chains ni <- 50000 # Number of draws from posterior per chain nb <- 10000 # Number of draws to discard as burn-in nt <- 1 # Thinning rate # Start Gibbs sampling #out <- bugs(data=win.data, inits=inits, parameters.to.save=params, # model.file="Negbin_model.txt", n.thin=nt, n.chains=nc, # n.burnin=nb, n.iter=ni, debug = TRUE, program="OpenBUGS") #print(out, dig = 3) out <- jags( data = win.data, parameters.to.save = params, model.file = "Negbin_model.txt", n.chains = nc, n.iter = ni, n.burnin = nb, n.thin=nt, inits=inits, progress.bar="text") #out <- update(out, n.iter=50000) out The output is: Inference for Bugs model at "Negbin_model.txt", fit using jags, 3 chains, each with 50000 iterations (first 10000 discarded) n.sims = 120000 iterations saved mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff DI[1] 0.141 0.347 0.000 0.001 0.010 0.098 1.166 1.001 59000 DI[2] 1.025 1.387 0.000 0.032 0.516 1.520 4.486 1.001 67000 beta[1] 0.378 0.269 -0.176 0.203 0.386 0.562 0.880 1.001 35000 beta[2] 1.287 0.326 0.664 1.066 1.280 1.502 1.950 1.001 22000 lambda[1] 1.512 0.404 0.839 1.225 1.471 1.755 2.412 1.001 35000 lambda[2] 5.381 1.035 3.657 4.704 5.285 5.933 7.700 1.001 27000 p[1] 0.942 0.111 0.585 0.945 0.994 0.999 1.000 1.001 120000 p[2] 0.682 0.271 0.205 0.441 0.699 0.973 1.000 1.001 33000 r[1] 2452.090 4589.758 2.000 25.000 240.000 2326.000 17602.050 1.001 59000 r[2] 1143.373 3351.097 2.000 4.000 12.000 193.000 13614.000 1.001 80000 vari[1] 0.232 0.696 0.000 0.002 0.015 0.143 1.923 1.001 57000 vari[2] 5.964 10.345 0.002 0.170 2.629 7.883 30.075 1.001 56000 deviance 83.371 2.319 80.238 81.701 82.964 84.510 89.135 1.001 120000 The posterior median dispersion index for the group "arable" (DI[2]) is larger than zero indicating overdispersion. On the other hand, the dispersion index for the group "grassland" (DI[1]) is only slightly larger than zero. 
Let's look at the posterior density of the dispersion index for the group "arable" and calculate the 95% Highest Posterior Density intervals (HDP) for the dispersion indices: library(ggplot2) library(runjags) jagsfit.matrix <- rbind(as.matrix(as.mcmc(out)[[1]]), as.matrix(as.mcmc(out)[[2]]), as.matrix(as.mcmc(out)[[3]])) name <- "DI[2]" vect <- jagsfit.matrix[, name] vect.plot <- vect[vect<=20] mcmc.combined <- combine.mcmc(as.mcmc(out)) hpd.ints <- HPDinterval(mcmc.combined, prob=0.95) hpd.ints #hdr(mcmc.combined[,"DI"], prob=95, h=hdrbw(mcmc.combined[,"DI"], gridsize=1000000, HDRlevel=0.95), nn=5000) plot.frame <- data.frame(dispersion=vect.plot) ggplot(plot.frame, aes(x=vect.plot)) + geom_density(alpha=0.5, fill="#1B4F97", color="#1B4F97") + geom_vline(xintercept = c(0, 3.561465e+00), alpha=0.6, size=1) + xlim(c(0,20)) + ylab("Density") + xlab("Dispersion index") + ggtitle("Posterior distribution of the dispersion index for the group \"arable\"") + theme(axis.title.y =element_text(vjust=0.4, size=20, angle=90)) + theme(axis.title.x =element_text(vjust=0, size=20, angle=0)) + theme(axis.text.x =element_text(size=15, colour = "black")) + theme(axis.text.y =element_text(size=17, colour = "black")) + theme(panel.background = element_rect(fill = "grey85", colour = NA), panel.grid.major = element_line(colour = "white"), panel.grid.minor = element_line(colour = "grey90", size = 0.25)) The 95% HDP for the dispersion parameter for the group "arable" ranges from $0$ to $3.56$ (marked by the vertical grey lines in the graphic above). Estimating overdispersion using a Poisson-log-normal model First, let's define our model: $$ \begin{align} Y_{i} &\sim \text{Poisson}(\lambda_{i}) \\ \log(\lambda_{i}) &= \mu_{i}+b_{i} \\ \mu_{i} &= \beta_{0} + \sum_{j=1}^{p}\beta_{j}X_{ij} \\ b_{i} &\sim \mathcal{N}(0, \sigma^{2}) \end{align} $$ The mean and the variance then are $$ \begin{align} E(Y|\lambda,\sigma_{b}^{2}) &= \lambda e^{\sigma_{b}^{2}/2} \\ V(Y|\lambda, \sigma_{b}^{2}) &= \lambda e^{\sigma_{b}^{2}/2} + \lambda^{2}e^{2\sigma_{b}^{2}} - \lambda^{2}e^{\sigma_{b}^{2}} \end{align} $$ So we simply add $b_{i}$ to the linear predictor to take the overdispersion into account. 
We can build the dispersion index (DI) into the model and estimate it for each group separately: sink("Poisson.OD.t.test.txt") cat(" model { # Priors alpha ~ dnorm(0,0.001) beta ~ dnorm(0,0.001) sigma ~ dunif(0, 10) sigma2 <- sigma*sigma tau <- 1 / sigma2 maybe_overdisp <- mean(exp_eps[]) kappa[1] <- exp(alpha) kappa[2] <- exp(alpha + beta) mean.x[1] <- exp(alpha + 0.5*sigma2) mean.x[2] <- exp(alpha + beta + 0.5*sigma2) vari[1] <- kappa[1]*exp(sigma2/2)+kappa[1]*kappa[1]*exp(2*sigma2) - kappa[1]*kappa[1]*exp(sigma2) vari[2] <- kappa[2]*exp(sigma2/2)+kappa[2]*kappa[2]*exp(2*sigma2) - kappa[2]*kappa[2]*exp(sigma2) DI[1] <- vari[1]/mean.x[1] DI[2] <- vari[2]/mean.x[2] # Likelihood for (i in 1:n) { C.OD[i] ~ dpois(lambda[i]) log(lambda[i]) <- alpha + beta*x[i] + eps[i] eps[i] ~ dnorm(0, tau) exp_eps[i] <- exp(eps[i]) #di.index.ind[i] <- 1 + exp(eps[i])*(es-1)*sqrt(es) } } ",fill=TRUE) sink() # Bundle data win.data <- list(C.OD = C.OD, x = as.numeric(x)-1, n = length(x)) # Inits function inits <- function(){ list(alpha=rlnorm(1), beta=rlnorm(1), sigma=rlnorm(1))} # Parameters to estimate params <- c("alpha", "beta", "sigma", "sigma2", "maybe_overdisp", "DI", "mean.x", "vari", "kappa") # MCMC settings nc <- 3 # Number of chains ni <- 50000 # Number of draws from posterior per chain nb <- 10000 # Number of draws to discard as burn-in nt <- 1 # Thinning rate # Start Gibbs sampling out2 <- jags( data = win.data, parameters.to.save = params, model.file = "Poisson.OD.t.test.txt", n.chains = nc, n.iter = ni, n.burnin = nb, n.thin=nt, inits=inits, progress.bar="text") The output is Inference for Bugs model at "Poisson.OD.t.test.txt", fit using jags, 3 chains, each with 50000 iterations (first 10000 discarded) n.sims = 120000 iterations saved mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff DI[1] 1.746 16.499 1.004 1.132 1.312 1.666 4.054 1.001 100000 DI[2] 3.573 22.604 1.013 1.486 2.133 3.404 12.319 1.001 8500 alpha 0.273 0.324 -0.416 0.069 0.293 0.496 0.857 1.001 4800 beta 1.294 0.384 0.559 1.042 1.286 1.538 2.076 1.001 7400 kappa[1] 1.382 0.436 0.660 1.072 1.340 1.642 2.355 1.001 4800 kappa[2] 4.911 1.076 2.927 4.199 4.868 5.560 7.167 1.001 23000 maybe_overdisp 1.119 0.176 0.900 1.006 1.075 1.187 1.570 1.001 49000 mean.x[1] 1.581 0.565 0.794 1.220 1.506 1.847 2.802 1.001 7000 mean.x[2] 5.632 1.505 3.591 4.728 5.413 6.225 9.022 1.001 31000 sigma 0.461 0.232 0.050 0.300 0.442 0.601 0.970 1.009 1300 sigma2 0.266 0.260 0.002 0.090 0.195 0.361 0.940 1.009 1300 vari[1] 6.195 846.592 0.925 1.501 1.986 2.844 9.543 1.001 45000 vari[2] 32.314 1013.170 4.830 7.456 11.123 19.352 98.875 1.001 11000 deviance 74.771 5.949 64.349 70.338 74.364 78.967 86.505 1.001 5100 First note the posterior medians of the groups means.x are close to the observed means of the data ($1.51$ vs. $1.5$ and $5.41$ vs. $5.3$). The posterior variances vari for the groups are also close to the observed variances ($1.99$ vs. $1.39$ and $11.12$ vs. $10.68$). Importantly, the posterior median of the estimate $\hat{\beta_{1}}$ (beta) ($1.286$) is very close to the estimate calculated by glm using family="quasipoisson" which was $1.262$. The posterior mean and median of the dispersion index (DI) is $1.75$ and $1.31$ for the group "grassland" and $3.57$ and $2.13$ for the group "arable". It seems that the data for the group "arable" is more overdispersed than the group "grassland". 
The dispersion parameter estimated by glm with quasi-Poisson likelihood was around $1.47$ which is in between the posterior medians of the two dispersion indices, so our estimations look reasonable. Let's look at the posterior density of the dispersion index for the group "arable" and calculate the 95% Highest Posterior Density intervals (HDP) for the dispersion indices: library(ggplot2) library(runjags) jagsfit.matrix <- rbind(as.matrix(as.mcmc(out2)[[1]]), as.matrix(as.mcmc(out2)[[2]]), as.matrix(as.mcmc(out2)[[3]])) name <- "DI[2]" vect <- jagsfit.matrix[, name] vect.plot <- vect[vect<=20] mcmc.combined <- combine.mcmc(as.mcmc(out2)) hpd.ints <- HPDinterval(mcmc.combined, prob=0.95) hpd.ints lower upper DI[1] 1.000000e+00 3.0691670 DI[2] 1.000001e+00 8.5232449 alpha -3.729040e-01 0.8911411 beta 5.598711e-01 2.0766436 deviance 6.397199e+01 86.0493853 kappa[1] 5.908833e-01 2.2448459 kappa[2] 2.821622e+00 7.0389753 maybe_overdisp 8.565536e-01 1.4860521 mean.x[1] 6.856225e-01 2.5876715 mean.x[2] 3.300125e+00 8.3619700 sigma 4.044946e-04 0.8686939 sigma2 9.929906e-08 0.7545791 vari[1] 6.201473e-01 6.6375401 vari[2] 3.416068e+00 61.4973745 attr(,"Probability") [1] 0.95 plot.frame <- data.frame(dispersion=vect.plot) ggplot(plot.frame, aes(x=vect.plot)) + geom_density(alpha=0.5, fill="#1B4F97", color="#1B4F97") + geom_vline(xintercept = c(1, 8.5232449), alpha=0.6, size=1) + xlim(c(0,20)) + ylab("Density") + xlab("Dispersion index") + ggtitle("Posterior distribution of the dispersion index for the group \"arable\"") + theme(axis.title.y =element_text(vjust=0.4, size=20, angle=90)) + theme(axis.title.x =element_text(vjust=0, size=20, angle=0)) + theme(axis.text.x =element_text(size=15, colour = "black")) + theme(axis.text.y =element_text(size=17, colour = "black")) + theme(panel.background = element_rect(fill = "grey85", colour = NA), panel.grid.major = element_line(colour = "white"), panel.grid.minor = element_line(colour = "grey90", size = 0.25)) The 95% HDP for the dispersion parameter for the group "arable" ranges from $1$ to $8.52$ (marked by the vertical grey lines in the graphic above). It is important to note that we used a uniform prior for the standard deviation of $b_{i}$. There are other possibilities and the posterior distribution of the dispersion index can vary depending on the prior. Other priors include but are not limited to: uniform on the variance ($\sigma^{2}$), half-normal prior on $\sigma$ or $\sigma^{2}$, half-Cauchy on $\sigma$ and others. 
Estimating overdispersion using a Poisson-gamma model We can also model the data using a Poisson-gamma model: $$ \begin{align} Y_{i} &\sim \text{Poisson}(\lambda_{i}u_{i}) \\ u_{i} &\sim \text{Gamma}(r_{i}, r_{i}) \end{align} $$ The WinBUGS model (or OpenBUGS, JAGS) is as follows: sink("gamma_mix.txt") cat(" model{ for(i in 1:n){ y[i] ~ dpois(mu.ind[i]) mu.ind[i] <- mu[i]*u[i] log(mu[i]) <- beta[1]+beta[2]*x[i] u[i] ~ dgamma(r[x[i]+1], r[x[i]+1]) } mean.u <- mean(u[]) lambda[1] <- exp(beta[1]) lambda[2] <- exp(beta[1] + beta[2]) for (j in 1:2){ r[j] ~ dgamma(0.001, 0.001) beta[j] ~ dnorm(0.0, 0.0001) DI[j] <- (1+lambda[j]/r[j]) vari[j] <- lambda[j]*DI[j] p[j] <- r[j]/(r[j]+lambda[j]) } } ",fill=TRUE) sink() # Bundle data win.data <- list(y = C.OD, x = as.numeric(x)-1, n = length(x)) # Inits function inits <- function(){ list(beta=rlnorm(2), r=rlnorm(2))} # Parameters to estimate params <- c("beta", "lambda", "r", "DI", "mean.x", "vari", "tau", "s", "s2", "mean.u") # MCMC settings nc <- 3 # Number of chains ni <- 50000 # Number of draws from posterior per chain nb <- 10000 # Number of draws to discard as burn-in nt <- 1 # Thinning rate out <- jags( data = win.data, parameters.to.save = params, model.file = "gamma_mix.txt", n.chains = nc, n.iter = ni, n.burnin = nb, n.thin=nt, inits=inits, progress.bar="text") The output is Inference for Bugs model at "gamma_mix.txt", fit using jags, 3 chains, each with 50000 iterations (first 10000 discarded) n.sims = 120000 iterations saved mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff DI[1] 1.180 0.492 1.001 1.008 1.038 1.163 2.196 1.002 5600 DI[2] 2.280 1.734 1.017 1.248 1.774 2.665 6.580 1.001 8500 beta[1] 0.375 0.286 -0.206 0.189 0.383 0.567 0.909 1.002 3100 beta[2] 1.292 0.352 0.612 1.058 1.287 1.523 1.995 1.002 3500 lambda[1] 1.515 0.445 0.814 1.209 1.467 1.763 2.481 1.002 3100 lambda[2] 5.414 1.180 3.524 4.649 5.287 6.009 8.087 1.001 12000 mean.u 1.001 0.093 0.820 0.952 0.998 1.045 1.207 1.001 34000 r[1] 184.429 373.385 1.358 8.872 37.503 172.345 1378.710 1.002 2100 r[2] 37.280 103.862 1.076 3.162 6.733 20.971 303.187 1.001 28000 vari[1] 1.861 2.106 0.863 1.294 1.600 1.998 4.259 1.001 27000 vari[2] 13.170 17.311 4.585 6.442 9.057 14.297 45.626 1.001 6500 deviance 76.089 5.763 65.979 71.679 75.802 80.428 87.121 1.001 16000 Again, the posterior means (lambda) and variances (vari) are very close to the observed ones. The estimate for the coefficient beta[2] is again practically identical to the estimates we've got using the Poisson-log-normal approach (i.e. $\approx 1.29$). The dispersion indices for the two groups are about $1.04$ for "grassland" and $1.77$ for "arable". These are very close to the observed overdispersion which are $0.926$ for "grassland" and $2.01$ for "arable", respectively. 
The HDPs and the posterior density of the dispersion index for the group "arable" is jagsfit.matrix <- rbind(as.matrix(as.mcmc(out)[[1]]), as.matrix(as.mcmc(out)[[2]]), as.matrix(as.mcmc(out)[[3]])) name <- "DI[2]" vect <- jagsfit.matrix[, name] vect.plot <- vect[vect<=20] mcmc.combined <- combine.mcmc(as.mcmc(out)) hpd.ints <- HPDinterval(mcmc.combined, prob=0.95) hpd.ints lower upper DI[1] 1.0002195 1.7847379 DI[2] 1.0022835 5.1658954 beta[1] -0.1984061 0.9146267 beta[2] 0.5995819 1.9807803 deviance 65.7749045 86.8262578 lambda[1] 0.7592323 2.3828823 lambda[2] 3.2764333 7.6725830 mean.u 0.8060325 1.1907674 r[1] 0.1508372 900.3718394 r[2] 0.2372253 193.6838708 vari[1] 0.6524941 3.2896264 vari[2] 3.5796354 33.4126294 attr(,"Probability") [1] 0.95 plot.frame <- data.frame(dispersion=vect.plot) ggplot(plot.frame, aes(x=vect.plot)) + geom_density(alpha=0.5, fill="#1B4F97", color="#1B4F97") + geom_vline(xintercept = c(1, 5.1658954), alpha=0.6, size=1) + xlim(c(0,20)) + ylab("Density") + xlab("Dispersion index") + ggtitle("Posterior distribution of the dispersion index for the group \"arable\"") + theme(axis.title.y =element_text(vjust=0.4, size=20, angle=90)) + theme(axis.title.x =element_text(vjust=0, size=20, angle=0)) + theme(axis.text.x =element_text(size=15, colour = "black")) + theme(axis.text.y =element_text(size=17, colour = "black")) + theme(panel.background = element_rect(fill = "grey85", colour = NA), panel.grid.major = element_line(colour = "white"), panel.grid.minor = element_line(colour = "grey90", size = 0.25)) The 95% HDP for the dispersion parameter for the group "arable" ranges from $1$ to $5.17$ (marked by the vertical grey lines in the graphic above). The interval is smaller than the interval obtained by the Poisson-log-normal approach, which was ranging from $1$ to $8.52$. 
Calculate dispersion parameter as in the quasi-Poisson model The dispersion parameter in the quasi-Poisson GLM is estiamted as follows: $$ \widehat{\mathrm{DI}}=\frac{1}{n-k}\sum_{i}^{n}r_{P,i}^{2} $$ where $n$ is the sample size, $k$ the number of estimated parameters and $r_{P,i}$ are the Pearson residuals: $$ r_{P}=\frac{y-\mu}{\sqrt{\mu}} $$ The dispersion index can be estimated using normal Poisson regression in WinBUGS: sink("poisson_dispersion.txt") cat(" model{ for(i in 1:n){ y[i] ~ dpois(mu[i]) log(mu[i]) <- beta[1] + beta[2]*x[i] fitted.y[i] <- exp(beta[1]+beta[2]*x[i]) } DI.index <- 1/(n-2)*sum(pow((y[]-fitted.y[]),2)/fitted.y[]) for (j in 1:2){ beta[j] ~ dnorm(0.0, 0.0001) } } ",fill=TRUE) sink() # Bundle data win.data <- list(y = C.OD, x = as.numeric(x)-1, n = length(x)) # Inits function inits <- function(){ list(beta=rlnorm(2))} # Parameters to estimate params <- c("beta", "DI.index") # MCMC settings nc <- 3 # Number of chains ni <- 50000 # Number of draws from posterior per chain nb <- 10000 # Number of draws to discard as burn-in nt <- 1 # Thinning rate out <- jags( data = win.data, parameters.to.save = params, model.file = "poisson_lognormal.txt", n.chains = nc, n.iter = ni, n.burnin = nb, n.thin=nt, inits=inits, progress.bar="text") out mu.vect sd.vect 2.5% 25% 50% 75% 97.5% Rhat n.eff DI.index 1.637 0.272 1.354 1.441 1.560 1.750 2.359 1.001 4500 beta[1] 0.375 0.261 -0.170 0.208 0.388 0.556 0.855 1.002 2700 beta[2] 1.282 0.294 0.729 1.080 1.273 1.474 1.886 1.001 4300 deviance 84.491 2.068 82.536 83.054 83.869 85.265 89.928 1.001 44000 The posterior median of the dispersion index is 1.56 and the 95%-HDI is ranging from $1.345$ to $2.172$ and the value of $1.47$ as estimated by glm is well within the 95%-HDI. Heres the density plot of the posterior distribution of the dispersion index:
Definition of dispersion parameter for quasipoisson family The dispersion parameter in the quasi-Poisson model Let us first see how the dispersion parameter is calculated in the model using quasi-Poisson likelihood. One assumption in Poisson regression is tha
27,516
When to use non-parametric regression?
Before looking at QQ plots of residuals, you should assess the quality of fit by plotting residuals against the predictors in the model (and possibly also against other variables you have which you did not use). Non-linearity should show up in these plots. If the effect of variable $x$ really is linear, you expect the plot of residuals against $x$ to be "horizontal", without visible structure:
 *      *   *        *
    *        *    *
--------------------------------------*------------------------------x
  *      *        *
      *    *   *
That is, a random horizontal "blob" of points, centered around the line resid=0. If the effect is non-linear, you expect to see some curvature in this plot. (And, please, ignore the QQ plots until you have the non-linearities sorted out, using plots as above!) You should also think about possible interactions (usually modelled by product terms), that is, whether the effect of one variable depends on the level of another. (If all your three variables have high values at the same time, maybe that indicates some particularly difficult patient? If so, interactions could be needed.) Only go for some non-linear model after having tried interactions and transformations: did you try log(Cost)? Did you try some Box-Cox transformations? Since you have multiple regression, I don't think that loess is what you need; you should look at gam (generalized additive models; SAS should have that, and in R it is in package mgcv).
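To make the suggestion concrete, here is a rough sketch in R; Cost is the response from your question, while x1, x2, x3 and the data frame dat are placeholders for your three predictors:

fit <- lm(Cost ~ x1 + x2 + x3, data = dat)

# residuals against each predictor: look for curvature
par(mfrow = c(1, 3))
for (v in c("x1", "x2", "x3")) {
  plot(dat[[v]], resid(fit), xlab = v, ylab = "residual")
  lines(lowess(dat[[v]], resid(fit)), col = "red")   # smooth, just to guide the eye
  abline(h = 0, lty = 2)
}

# if curvature shows up, a generalized additive model is one option
library(mgcv)
fit.gam <- gam(Cost ~ s(x1) + s(x2) + s(x3), data = dat)
plot(fit.gam, pages = 1)   # estimated smooth effect of each predictor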
When to use non-parametric regression?
Before looking on QQplots of residuals, you should assess the quality of fit, by plotting residuals against the predictors in the model (and possibly, also against other variables you have which you d
When to use non-parametric regression? Before looking on QQplots of residuals, you should assess the quality of fit, by plotting residuals against the predictors in the model (and possibly, also against other variables you have which you did not use). Non-linearity should show up in this plots. If the effect of variable $x$ really is linear, you expect the plot of residuals against $x$ to be "horizontal", without visible structure: * * * * * * * --------------------------------------*------------------------------x * * * * * * That is, a random horizontal "blob" of points, centered around the line resid=0. If the effect is non-linear, you expect to see some curvature in this plot. (and, please, ignore the QQplots until you got non-linearities sorted out, using plots as above!) You should also think about possible interactions (modelled usually by product terms), that is, the effect of one variable depends on the levels of another, (If all your three variables have high values at the same time, maybe that shows some particularly difficult patient? If so, interactions could be needed). If you go for some non-linear model, after having tried for interactions and transformations (did you try log(Cost)?) Did you try some box-cox-transformations? Since you have multiple regression, I don't think that loess is what you need, you should look for gam (generalized additive models, SAS should have that, in R it is in package mgcv).
When to use non-parametric regression? Before looking on QQplots of residuals, you should assess the quality of fit, by plotting residuals against the predictors in the model (and possibly, also against other variables you have which you d
27,517
When to use non-parametric regression?
A LOESS will always give a closer in-sample fit than linear regression, unless the data truly lie along a straight line. LOESS is a locally linear approximation that is designed to pass close to the data. These methods are basically exploratory. And while it is dangerous to extrapolate a linear model beyond the limits of the fit, extrapolation would be reckless in the case of LOESS. If your model gives you negative costs, that's a pretty good sign that a linear regression is not appropriate on the variables you have. You say that you tried transformations. Did you take the log of cost against your predictors? In the nature of things, it is unlikely that there is a simple relationship between cost and the variables you mention. Sometimes the purpose of a linear regression is simply to demonstrate that some sort of correlation exists, and perhaps to select a sensible set of predictors.
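A toy sketch (made-up data, not from the question) shows both points: loess typically tracks the data more closely in-sample, and its extrapolation beyond the observed range is essentially meaningless:

set.seed(2)
x <- runif(60, 0, 10)
y <- 2 + 0.5 * x + rnorm(60)

fit.lm <- lm(y ~ x)
fit.lo <- loess(y ~ x, control = loess.control(surface = "direct"))  # "direct" allows extrapolation

newx <- data.frame(x = seq(0, 15, by = 0.5))   # values above 10 are extrapolation
plot(x, y, xlim = c(0, 15))
lines(newx$x, predict(fit.lm, newx), col = "blue")   # linear fit extends sensibly
lines(newx$x, predict(fit.lo, newx), col = "red")    # loess fit can wander off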
When to use non-parametric regression?
A LOESS will always give a better fit than regression, unless the data truly lie along a straight line. LOESS is a locally linear approximation that is designed to pass close to the data. These method
When to use non-parametric regression? A LOESS will always give a better fit than regression, unless the data truly lie along a straight line. LOESS is a locally linear approximation that is designed to pass close to the data. These methods are basically exploratory. And while it is dangerous to extrapolate a linear model beyond the limits of the fit, extrapolation would be reckless in the case of LOESS. If your model gives you negative costs, that's a pretty good sign that a linear regression is not appropriate on the variables you have. You say that you tried transformations. Did you take the log of cost against your predictors? In the nature of things, it is unlikely that there is a simple relationship between cost and the variables you mention. Sometimes the purpose of a linear regression is simply to demonstrate that some sort of correlation exists, and perhaps to select a sensible set of predictors.
When to use non-parametric regression? A LOESS will always give a better fit than regression, unless the data truly lie along a straight line. LOESS is a locally linear approximation that is designed to pass close to the data. These method
27,518
When to use non-parametric regression?
Bravo for doing residual analysis. That puts you way ahead of the typical analyst. (Your description of the model is deficient in not describing the error structure, though.) You should be considering transformations of the X's as well as looking at transformations of the Y's. I realize that SAS is behind R in modeling with spline fits, but I understand that recent versions have offered that capacity. Consider adding restricted cubic spline fits for the X terms. As a reference, Frank Harrell's text "Regression Modeling Strategies" is hard to beat. It has solid statistical arguments for this approach. It is a parametric approach that allows discovery of structure in the data that would otherwise be missed.
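As a rough sketch of the idea in R (placeholder variable names again, and the rms call is only indicated in a comment): base R's splines::ns() gives a natural (restricted) cubic spline basis, while Harrell's rms package provides rcs() directly.

library(splines)
fit.lin <- lm(Cost ~ x1 + x2 + x3, data = dat)
fit.ns <- lm(Cost ~ ns(x1, df = 4) + ns(x2, df = 4) + x3, data = dat)
anova(fit.lin, fit.ns)   # does allowing curvature in x1 and x2 improve the fit?

# the rms version would look roughly like:
# library(rms)
# fit.rcs <- ols(Cost ~ rcs(x1, 4) + rcs(x2, 4) + x3, data = dat)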
When to use non-parametric regression?
Bravo for doing residual analysis. Puts you way ahead of the typical analyst. (Your description of the model is deficient in not describing the error structure, though.) You should be considering tra
When to use non-parametric regression? Bravo for doing residual analysis. Puts you way ahead of the typical analyst. (Your description of the model is deficient in not describing the error structure, though.) You should be considering transformations of the X's as well as looking at transformations of the Y's. I realize that SAS is behind R in modeling with spline fits but I understand that recent versions have offered that capacity. Consider adding restricted cubic spline fits for the X terms. As a reference Frank Harrell's text "Regression Modeling Strategies" is hard to beat. It has solid statistical arguments for this approach. It is a parametric approach that allows discovery of structure in the data that would other wise be missed.
When to use non-parametric regression? Bravo for doing residual analysis. Puts you way ahead of the typical analyst. (Your description of the model is deficient in not describing the error structure, though.) You should be considering tra
27,519
When to use non-parametric regression?
I think kjetil has given you some good suggestions. I would add that non-normal residuals do not mean you have to jump from linear or nonlinear regression to nonparametric regression. By going to nonparametric regression you give up the structure of a functional form. There are robust regression alternatives to OLS regression that you could go to first, then generalized linear models and generalized additive models if further steps are needed. LOESS should in my view be your last resort. I think that I agree with kjetil on that.
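One possible progression of this kind in R, before resorting to LOESS; the package choices and variable names here are illustrative assumptions, not the answerer's prescription:

library(MASS)   # rlm: robust regression
library(mgcv)   # gam: generalized additive models

fit_ols <- lm(Cost ~ age + bmi + severity, data = dat)
fit_rob <- rlm(Cost ~ age + bmi + severity, data = dat)        # robust to outlying errors
fit_glm <- glm(Cost ~ age + bmi + severity,
               family = Gamma(link = "log"), data = dat)       # positive, right-skewed response
fit_gam <- gam(Cost ~ s(age) + s(bmi) + s(severity),
               family = Gamma(link = "log"), data = dat)       # smooth terms only if still needed

AIC(fit_ols, fit_glm, fit_gam)  # rough comparison; rlm has no likelihood-based AIC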
When to use non-parametric regression?
I think kjetil has given you some good suggestions. I would add that non-normal residuals does not mean you have to jump from linear or nonlinear regression to nonparametric regression. By going to
When to use non-parametric regression? I think kjetil has given you some good suggestions. I would add that non-normal residuals does not mean you have to jump from linear or nonlinear regression to nonparametric regression. By going to nonparametric regression you give up the structure of a functional form. There are robust regression alternative to OLS regression that you could go to first. Then generalized linear models and generalized additive models if next steps are needed. LOESS should in my view be your last resort. I think that I agree with kjetil on that.
When to use non-parametric regression? I think kjetil has given you some good suggestions. I would add that non-normal residuals does not mean you have to jump from linear or nonlinear regression to nonparametric regression. By going to
27,520
How to create recommender system that integrates both collaborative filtering and content features?
Why are you considering a neural network before completely understanding the problem? Standard matrix factorization methods for collaborative filtering are able to leverage content features easily. For an example of how this can be done in a Bayesian setting see the Matchbox paper.
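As a toy illustration of one way content features can enter a factorization model (this is not the Matchbox model itself, just a sketch on made-up data), the prediction below adds a linear term in item content features to the usual user-item inner product and is fitted by stochastic gradient descent in R:

set.seed(42)
n_users <- 50; n_items <- 40; k <- 5; n_feat <- 3
X <- matrix(rnorm(n_items * n_feat), n_items, n_feat)   # item content features (made up)
obs <- data.frame(u = sample(n_users, 2000, replace = TRUE),
                  i = sample(n_items, 2000, replace = TRUE))
obs$r <- rnorm(nrow(obs), mean = 3)                      # observed ratings (made up)

U <- matrix(0.1 * rnorm(n_users * k), n_users, k)        # user latent factors
V <- matrix(0.1 * rnorm(n_items * k), n_items, k)        # item latent factors
beta <- rep(0, n_feat)                                   # weights on content features
lr <- 0.01; lambda <- 0.05

for (epoch in 1:20) {
  for (t in sample(nrow(obs))) {
    u <- obs$u[t]; i <- obs$i[t]
    uu <- U[u, ]; vi <- V[i, ]
    err <- obs$r[t] - (sum(uu * vi) + sum(X[i, ] * beta))
    U[u, ] <- uu   + lr * (err * vi     - lambda * uu)
    V[i, ] <- vi   + lr * (err * uu     - lambda * vi)
    beta   <- beta + lr * (err * X[i, ] - lambda * beta)
  }
}

predict_rating <- function(u, i) sum(U[u, ] * V[i, ]) + sum(X[i, ] * beta)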
How to create recommender system that integrates both collaborative filtering and content features?
Why are you considering a neural network before completely understanding the problem? Standard matrix factorization methods for collaborative filtering are able to leverage content features easily. Fo
How to create recommender system that integrates both collaborative filtering and content features? Why are you considering a neural network before completely understanding the problem? Standard matrix factorization methods for collaborative filtering are able to leverage content features easily. For an example of how this can be done in a Bayesian setting see the Matchbox paper.
How to create recommender system that integrates both collaborative filtering and content features? Why are you considering a neural network before completely understanding the problem? Standard matrix factorization methods for collaborative filtering are able to leverage content features easily. Fo
27,521
How to create recommender system that integrates both collaborative filtering and content features?
Three papers about integrating matrix factorization with content features (here, topic models specifically): Deepak Agarwal and Bee-Chung Chen. 2010. fLDA: matrix factorization through latent dirichlet allocation. In Proceedings of the third ACM international conference on Web search and data mining (WSDM ’10). ACM, New York, NY, USA, 91-100. Hanhuai Shan and Arindam Banerjee. 2010. Generalized Probabilistic Matrix Factorizations for Collaborative Filtering. In Proceedings of the 2010 IEEE International Conference on Data Mining (ICDM ’10). IEEE Computer Society, Washington, DC, USA, 1025-1030. Chong Wang and David M. Blei. 2011. Collaborative topic modeling for recommending scientific articles. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD ’11). ACM, New York, NY, USA, 448-456. I would also promote my own blog entry that discusses this issue a little bit: Topic Models meet Latent Factor Models
How to create recommender system that integrates both collaborative filtering and content features?
Three papers about integrating matrix factorization with content features (here, topic model specifically): Deepak Agarwal and Bee-Chung Chen. 2010. fLDA: matrix factorization through latent dirichle
How to create recommender system that integrates both collaborative filtering and content features? Three papers about integrating matrix factorization with content features (here, topic model specifically): Deepak Agarwal and Bee-Chung Chen. 2010. fLDA: matrix factorization through latent dirichlet allocation. In Proceedings of the third ACM international conference on Web search and data mining (WSDM ’10). ACM, New York, NY, USA, 91-100. Hanhuai Shan and Arindam Banerjee. 2010. Generalized Probabilistic Matrix Factorizations for Collaborative Filtering. In Proceedings of the 2010 IEEE International Conference on Data Mining (ICDM ’10). IEEE Computer Society, Washington, DC, USA, 1025-1030. Chong Wang and David M. Blei. 2011. Collaborative topic modeling for recommending scientific articles. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD ’11). ACM, New York, NY, USA, 448-456. I also would promote my own blog entry that discusses this issue a little bit: Topic Models meet Lantent Factor Models
How to create recommender system that integrates both collaborative filtering and content features? Three papers about integrating matrix factorization with content features (here, topic model specifically): Deepak Agarwal and Bee-Chung Chen. 2010. fLDA: matrix factorization through latent dirichle
27,522
How to create recommender system that integrates both collaborative filtering and content features?
There is no need for a neural network approach; collaborative filtering is an algorithm in its own right. For your problem specifically, there is a good description of cf and recommender systems on ml-class.org (look for XVI: Recommender Systems). It is elegant, simple, and if you do it right (that is, use the vectorized form, fast minimizers, and prepared gradients) it can be quite fast.
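For concreteness, here is a sketch of that regularized collaborative-filtering objective and its gradient in vectorized R, handed to a generic minimizer; the toy data and names are mine, and this follows the general recipe from the course rather than any exact assignment code:

cf_cost <- function(par, Y, R, n_u, n_m, k, lambda) {
  X  <- matrix(par[1:(n_m * k)], n_m, k)          # item factors
  Th <- matrix(par[-(1:(n_m * k))], n_u, k)       # user factors
  E  <- (X %*% t(Th) - Y) * R                     # errors on observed ratings only
  0.5 * sum(E^2) + lambda / 2 * (sum(X^2) + sum(Th^2))
}

cf_grad <- function(par, Y, R, n_u, n_m, k, lambda) {
  X  <- matrix(par[1:(n_m * k)], n_m, k)
  Th <- matrix(par[-(1:(n_m * k))], n_u, k)
  E  <- (X %*% t(Th) - Y) * R
  c(as.vector(E %*% Th + lambda * X),             # gradient w.r.t. item factors
    as.vector(t(E) %*% X + lambda * Th))          # gradient w.r.t. user factors
}

# Toy example: Y holds ratings, R marks which entries were actually observed
set.seed(1); n_u <- 20; n_m <- 30; k <- 4
R <- matrix(rbinom(n_m * n_u, 1, 0.3), n_m, n_u)
Y <- matrix(runif(n_m * n_u, 1, 5), n_m, n_u) * R
p0 <- rnorm((n_m + n_u) * k, sd = 0.1)
fit <- optim(p0, cf_cost, cf_grad, method = "L-BFGS-B",
             Y = Y, R = R, n_u = n_u, n_m = n_m, k = k, lambda = 1)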
How to create recommender system that integrates both collaborative filtering and content features?
There is no need for a neural network approach, collaborative filtering is an algorithm on itself. For your problem specifically, there is a good description of cf and recomender system on: ml-class.o
How to create recommender system that integrates both collaborative filtering and content features? There is no need for a neural network approach, collaborative filtering is an algorithm on itself. For your problem specifically, there is a good description of cf and recomender system on: ml-class.org (look for XVI: Recommender Systems). It is elegant, simple, and if you do it right (that is, use vectorized form, fast minimizers, and prepared gradients) it can be quite fast.
How to create recommender system that integrates both collaborative filtering and content features? There is no need for a neural network approach, collaborative filtering is an algorithm on itself. For your problem specifically, there is a good description of cf and recomender system on: ml-class.o
27,523
Detecting parts of a song
I'm no expert on signal processing, but I know a fair bit about music theory. I'd say that, on the contrary, classical music would probably be some of the hardest music to analyze by simple mathematical methods. You'd best start with something simpler and more repetitive, such as pop or techno music. Pop often has a verse-chorus-verse...etc format that might be conducive to a simplistic version of your goals. Try using a Fourier Transform on your data to break it into its most prominent constituent frequencies, maybe hierarchically among different subsections. In particular you can look for different things based on how you want to group the "parts" of your data. The slowest oscillations in your pop music will probably be the shifts between verse and chorus and back to verse (maybe 0.75 oscillations per minute?). Next you might find higher frequency oscillations among your chord progressions, that is, among each full measure of your song (maybe around 6 oscillations per minute?). The next highest frequency, I'd think, would be a bar within a measure (maybe about 24 oscillations per minute?), within which the strumming pattern and syncopation of lyrics often repeat in pop/folk music. Getting down into the gory details, next you'll find the beats and rhythms that repeat within each bar of your music. Picking out and isolating one of these (at maybe 148 oscillations/beats per minute?) would likely yield a bass drum kick, or a cowbell hit, or something along a similar order. Somewhere in between beats and tones you might find rapid stylistic elements of your music such as speed/sweep picking on an electric guitar, or fast vocal rapping rhythm. (I have no idea how fast these might be, but I would guess somewhere on the order of 1000 beats per minute or more.) Lastly, fastest of all, and probably most complexly, are the elements of tone and timbre. I know that the "middle A" note is standardized to be 440 Hz, that is, 440 oscillations per SECOND. I'm sure there are techniques for differentiating, based on tonal quality and timbre, what kinds of instruments are being used; there are even fairly good algorithms for detecting human vocals. However, like I said, I'm no signal processing expert.
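To make the Fourier idea tangible, here is a toy R sketch (entirely synthetic audio with invented frequencies) that locates the boundary between two "parts" of a signal from the dominant frequency in each short window:

fs <- 8000                                    # sample rate in Hz
t  <- seq(0, 5, by = 1/fs)
song <- c(sin(2 * pi * 440 * t),              # "verse": 440 Hz tone
          sin(2 * pi * 660 * t))              # "chorus": 660 Hz tone

win <- 2048; hop <- 1024
starts <- seq(1, length(song) - win, by = hop)
dom_freq <- sapply(starts, function(s) {
  spec <- abs(fft(song[s:(s + win - 1)]))[1:(win / 2)]
  (which.max(spec) - 1) * fs / win            # frequency of the strongest bin
})

plot(starts / fs, dom_freq, type = "l",
     xlab = "time (s)", ylab = "dominant frequency (Hz)")
# The jump in dominant frequency marks the boundary between the two sections.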
Detecting parts of a song
I'm no expert on signal processing, but I know a fair bit about music theory. I'd say that, on the contrary, classical music would probably be some of the hardest music to analyze by simple mathematic
Detecting parts of a song I'm no expert on signal processing, but I know a fair bit about music theory. I'd say that, on the contrary, classical music would probably be some of the hardest music to analyze by simple mathematical methods. You'd best start with something simpler and more repetitive, such as pop or techno music. Pop often has a verse-chorus-verse...etc format that might be conducive to a simplistic version of your goals. Try using a Fourier Transform on your data to break it into its most prominent constituent frequencies, maybe hierarchically among different subsections. In particular you can look for different things based on how you want to group the "parts" of your data. The slowest oscillations in your pop music will probably be the shifts between verse and chorus and back to verse (maybe 0.75 oscillations per minute?). Next you might find higher frequency oscillations among your chord progressions, that is, among each full measure of your song (maybe around 6 oscillations per minute?). Next highest frequency I'd think would be a bar within a measure (maybe about 24 oscillations per minute?) within which the strumming pattern and syncopation of lyrics often repeat in pop/folk music. Getting down into the gory details, next you'll find the beats and rhythms that repeat within each bar of your music. Picking out and isolating one of these (at maybe 148 oscillations/beats per minute?) would likely yield a bass drum kick, or a cowbell hit, or something along a similar order. Somewhere in between beats and tones you might to find rapid stylistic elements of your music such as speed/sweep picking on an electric guitar, or fast vocal rapping rhythm. (I have no idea how fast these might be, but I would guess somewhere on the order of 1000 beats per minute or more). Lastly, fastly, and probably most complexly, are the elements of tone and timbre. I know that the "middle A" note is standardized to be 440 Hz, that is, 440 oscillations per SECOND. I'm sure there are techniques for differentiating based on tonal quality and timbre what kinds of instruments are being used; there are even fairly good algorithms for detecting human vocals. However like I said, I'm no signal processing expert.
Detecting parts of a song I'm no expert on signal processing, but I know a fair bit about music theory. I'd say that, on the contrary, classical music would probably be some of the hardest music to analyze by simple mathematic
27,524
Detecting parts of a song
The music is usually described using MPEG-7 descriptors, with some additional features such as MFCCs, calculated on chunks of the piece made by some moving-window approach (i.e. you have some window size and hop; start with the window placed at the beginning of the sound, calculate the descriptors on the window, then move it by the hop and repeat until the end is reached). This way a piece is transformed into a table; in your case that table can be used to apply some clustering on the chunks and so detect those "parts".
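A crude R illustration of that window/hop feature table followed by clustering; a real system would use MFCCs (for instance from the tuneR package) rather than the two improvised features below, and the audio here is synthetic:

fs <- 8000
song <- c(sin(2 * pi * 440 * seq(0, 5, by = 1/fs)),
          sin(2 * pi * 660 * seq(0, 5, by = 1/fs)))     # toy two-section signal

window_features <- function(x, fs, win = 2048, hop = 1024) {
  starts <- seq(1, length(x) - win, by = hop)
  t(sapply(starts, function(s) {
    seg   <- x[s:(s + win - 1)]
    spec  <- abs(fft(seg))[1:(win / 2)]
    freqs <- (seq_along(spec) - 1) * fs / win
    c(rms = sqrt(mean(seg^2)),                          # energy of the chunk
      centroid = sum(freqs * spec) / sum(spec))         # spectral centroid of the chunk
  }))
}

feats <- window_features(song, fs)
cl <- kmeans(scale(feats), centers = 2)                 # cluster chunks into candidate "parts"
plot(cl$cluster, type = "s", xlab = "window index", ylab = "cluster")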
Detecting parts of a song
The music is usually described using MPEG7 descriptors with some additional stuff like MFCCs calculated on the chunks of piece made by some moving window approach (i.e. you have some window size and h
Detecting parts of a song The music is usually described using MPEG7 descriptors with some additional stuff like MFCCs calculated on the chunks of piece made by some moving window approach (i.e. you have some window size and hop, start with the window placed on the beginning of the sound, calculate the descriptors on the window, then move it by hop and repeat until the end is reached). This way a piece is transformed into a table; in your case it can be used to apply some clustering on the chunks and so detect those "parts".
Detecting parts of a song The music is usually described using MPEG7 descriptors with some additional stuff like MFCCs calculated on the chunks of piece made by some moving window approach (i.e. you have some window size and h
27,525
Detecting parts of a song
There are a lot of different methods and a plethora of literature on this topic from a wide variety of perspectives. Here are a few highlights that might be good starting points for your search. If your background is more musical than mathematical or computational, you might be interested in the works of David Cope; most of his published work focuses on the analysis of classical music pieces, but he has a private venture called recombinant that seems more general. A lot of his work treats music as a language (language-type models), but I believe at least some of his most recent work has shifted more toward a whole musical-genome-like approach. He has a lot of software available online, but it is generally written in Lisp, and some can only run in various versions of Apple's OS, though some should work in Linux or anywhere you can get Common Lisp to run. Analysis of signals and music in general has been a very popular problem in machine learning. There is good starting coverage in the Christopher Bishop texts Neural Networks for Pattern Recognition and Pattern Recognition and Machine Learning. Here is an example of an MSc paper that has the music classification part but also has good coverage of feature extraction; that author cites at least one of the Bishop texts and several other sources, and also recommends several sources for more current papers on these topics. Books that are more mathematical or statistical (at least by their authorship if not by their content): since I mentioned Bishop and the computational perspective of machine learning, I'd only be telling half the story if I didn't also suggest you take a glance at the more recent Elements of Statistical Learning (which is available for free legal download) by Hastie, Tibshirani, and Friedman. I don't remember there specifically being an audio processing example in this text, but a number of the methods covered could be adapted to this problem. One more text worth considering is Jan Beran's Statistics in Musicology. This provides a number of statistical tools specifically for the analysis of musical works and also has numerous references. Again, there are many, many other sources out there. A lot of this depends on what your background is and which approach to the problem you're most comfortable with. Hopefully at least some of this guides you a bit in your search for an answer. If you tell us more about your background, additional details about the problem, or ask a question in response to this post, I'm sure I or many of the others here would be happy to direct you to more specific information. Best of luck!
Detecting parts of a song
There are a lot of different methods and a plethora of literature on this topic from a wide variety of perspectives. Here are a few highlights that might be good starting points for your search. If yo
Detecting parts of a song There are a lot of different methods and a plethora of literature on this topic from a wide variety of perspectives. Here are a few highlights that might be good starting points for your search. If your background is more musical than mathematical or computational you might be interested in the works of David Cope most of his published works focus on the analysis of classical music pieces, but he has a private venture called recombinant that seems more general. A lot of his work used music as a language type models, but I believe at least some of his most recent work has shifted more toward the whole musical genome like approach. He has a lot of software available online, but it is generally written in Lisp and some can only run in various versions of Apple's OS though some should work in Linux or anywhere you can get common lisp to run. Analysis of signals and music in general has been a very popular problem in machine learning. There is good starting coverage in the Christopher Bishop texts Neural Networks for Pattern Recognition and Pattern Recognition and Machine Learning. Here is an example of a MSc paper that has the music classification part, but has good coverage on feature extraction, that author cites at least one of the Bishop texts and several other sources. He also recommends several sources for more current papers on the topics. Books that are more mathematical or statistical (at least by their authorship if not by their content): Since I mentioned Bishop and the computational perspective of machine learning I'd only be telling half the story if I didn't also suggest you take a glance at the more recent Elements of Statistical Learning (which is available for free legal download) by Hastie, Tibshirani, and Friedman. I don't remember there specifically being an audio processing example in this text, but a number of the methods covered could be adapted to this problem. One more text worth considering is Jan Beran's Statistics in Musicology. This provides a number of statistical tools specifically for the analysis of musical works and also has numerous references. Again there are many many other sources out there. A lot of this depends on what your background is and which approach to the problem you're most comfortable with. Hopefully at least some of this guides you a bit in your search for an answer. If you tell us more about your background, additional details about the problem, or ask a question in response to this post I'm sure I or many of the others here would be happy to direct you to more specific information. Best of luck!
Detecting parts of a song There are a lot of different methods and a plethora of literature on this topic from a wide variety of perspectives. Here are a few highlights that might be good starting points for your search. If yo
27,526
Detecting parts of a song
Not a great answer, but two places to look for research are: the International Society for Music Information Retrieval (www.ismir.net), which has tons of published papers about just this topic (amazing how much info there is), and Echo Nest (echonest.com), a startup with an API to do similar stuff. UPDATE: they also released some open source fingerprinting code. http://echoprint.me/
Detecting parts of a song
Not a great answer but two places to look for research are: International Society for Music Information Retrieval has tons of published papers about just this topic, amazing how much info there is www
Detecting parts of a song Not a great answer but two places to look for research are: International Society for Music Information Retrieval has tons of published papers about just this topic, amazing how much info there is www.ismir.net & Echo Nest (A Startup with an API to do similar stuff) echonest.com UPDATE: they also released some open source fingerprinting code. http://echoprint.me/
Detecting parts of a song Not a great answer but two places to look for research are: International Society for Music Information Retrieval has tons of published papers about just this topic, amazing how much info there is www
27,527
Detecting parts of a song
I was interested in a similar problem. Here is one solution: a fairly recent scientific proposal called the scape plot. See this article for details (it looks nice). In addition, I would recommend you also visit the author's website, since there are a lot of similar applications of statistics in music there. When searching for other similar sources, I recommend using the term Music Information Retrieval, which covers this and related areas.
Detecting parts of a song
I was interested in the similar problem. Here is the solution. It is not so old scientific proposal that is called scape plot. See this article for details (it looks nice). In addition, I would recom
Detecting parts of a song I was interested in the similar problem. Here is the solution. It is not so old scientific proposal that is called scape plot. See this article for details (it looks nice). In addition, I would recommend you to also visit author's website since there is a lot of similar applications of statistics in music. When searching for other similar sources, I recommend to use the term Music Information Retrieval that includes similar areas.
Detecting parts of a song I was interested in the similar problem. Here is the solution. It is not so old scientific proposal that is called scape plot. See this article for details (it looks nice). In addition, I would recom
27,528
Correlation between two variables of unequal size
No amount of imputation, time series analysis, GARCH models, interpolation, extrapolation, or other fancy algorithms will do anything to create information where it does not exist (although they can create that illusion ;-). The history of Y's price before X went public is useless for assessing their subsequent correlation. Sometimes (often preparatory to an IPO) analysts use internal accounting information (or records of private stock transactions) to retrospectively reconstruct hypothetical prices for X's stock before it went public. Conceivably such information could be used to enhance estimates of correlation, but given the extremely tentative nature of such backcasts, I doubt the effort would be of any help except initially when there are only a few days or weeks of prices for X available.
Correlation between two variables of unequal size
No amount of imputation, time series analysis, GARCH models, interpolation, extrapolation, or other fancy algorithms will do anything to create information where it does not exist (although they can c
Correlation between two variables of unequal size No amount of imputation, time series analysis, GARCH models, interpolation, extrapolation, or other fancy algorithms will do anything to create information where it does not exist (although they can create that illusion ;-). The history of Y's price before X went public is useless for assessing their subsequent correlation. Sometimes (often preparatory to an IPO) analysts use internal accounting information (or records of private stock transactions) to retrospectively reconstruct hypothetical prices for X's stock before it went public. Conceivably such information could be used to enhance estimates of correlation, but given the extremely tentative nature of such backcasts, I doubt the effort would be of any help except initially when there are only a few days or weeks of prices for X available.
Correlation between two variables of unequal size No amount of imputation, time series analysis, GARCH models, interpolation, extrapolation, or other fancy algorithms will do anything to create information where it does not exist (although they can c
27,529
Correlation between two variables of unequal size
So the problem is one of missing data (not all Y have a corresponding X, where correspondence is operationalized via time points). I don't think there is much more to do here than to throw away the Y you don't have an X for and calculate the correlation on the complete pairs. You may want to read up on financial time series, though I don't have a good reference handy at this point (ideas, anyone?). Stock prices often exhibit time-varying volatilities, which can be modeled, e.g., by GARCH. It is conceivable that your two time series X and Y exhibit positive correlations during periods of low volatility (when the economy grows, all stock prices tend to increase), but negative correlations when overall volatility is high (on 9/11, airlines tanked while money fled to safer investments). So just calculating an overall correlation may be too dependent on your observation time frame. UPDATE: I think you may want to look at VAR (vector autoregressive) models.
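A small R sketch of the complete-pairs step and a rolling check of stability; the two price series are simulated and the column names are placeholders (GARCH or VAR extensions would come from packages such as rugarch or vars):

set.seed(3)
x <- data.frame(date = as.Date("2018-01-01") + 0:499,
                px = 100 * exp(cumsum(rnorm(500,  sd = 0.01))))   # younger company, shorter history
y <- data.frame(date = as.Date("2016-01-01") + 0:999,
                py = 50  * exp(cumsum(rnorm(1000, sd = 0.01))))   # older company, longer history

both <- merge(x, y, by = "date")                      # keeps only dates present in both series
rx <- diff(log(both$px)); ry <- diff(log(both$py))    # correlate returns, not price levels
cor(rx, ry)

width <- 60                                           # 60-day rolling correlation
roll <- sapply(seq_len(length(rx) - width + 1),
               function(i) cor(rx[i:(i + width - 1)], ry[i:(i + width - 1)]))
plot(roll, type = "l", ylab = "rolling correlation")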
Correlation between two variables of unequal size
So the problem is one of missing data (not all Y have a corresponding X, where correspondence is operationalized via time points). I don't think there is much to do here than just to throw away the Y
Correlation between two variables of unequal size So the problem is one of missing data (not all Y have a corresponding X, where correspondence is operationalized via time points). I don't think there is much to do here than just to throw away the Y you don't have an X for and calculate the correlation on the full pairs. You may want to read up on financial time series, though I don't have a good reference handy at this point (ideas, anyone?). Stock prices often exhibit time-varying volatilities, which can be modeled, e.g., by GARCH. It is conceivable that your two time series X and Y exhibit positive correlations during periods of low volatility (when the economy grows, all stock prices tend to increase), but negative correlations when overall volatility is high (on 9/11, airlines tanked while money fled to safer investments). So just calculating an overall correlation may be too dependent on your observation time frame. UPDATE: I think you may want to look at VAR (vector autoregressive) models.
Correlation between two variables of unequal size So the problem is one of missing data (not all Y have a corresponding X, where correspondence is operationalized via time points). I don't think there is much to do here than just to throw away the Y
27,530
Correlation between two variables of unequal size
@Jeromy Anglim specified this correctly. Having the extra information when only one of the time series existed would provide no value here. And in principle, the data should be sampled at the same time for it to be meaningful using conventional correlation measures. As a more general problem, I would add that there are techniques to deal with irregularly spaced time series data. You can search for "irregularly spaced time series correlation". Some of the recent work has been done on "Realized Volatility and Correlation" (Andersen, Bollerslev, Diebold, and Labys 1999) using high-frequency data.
Correlation between two variables of unequal size
@Jeromy Anglim specified this correctly. Having the extra information when only one of the time series existed would provide no value here. And in principle, the data should be sampled at the same t
Correlation between two variables of unequal size @Jeromy Anglim specified this correctly. Having the extra information when only one of the time series existed would provide no value here. And in principle, the data should be sampled at the same time for it to be meaningful using conventional correlation measures. As a more general problem, I would add that there are techniques to deal with irregularly spaced time series data. You can search for "irregularly spaced time series correlation". Some of the recent work has been done on "Realized Volatility and Correlation" (Andersen, Bollerslev, Diebold, and Labys 1999) using high-frequency data.
Correlation between two variables of unequal size @Jeromy Anglim specified this correctly. Having the extra information when only one of the time series existed would provide no value here. And in principle, the data should be sampled at the same t
27,531
Correlation between two variables of unequal size
Given the extra information in your comments I'd recommend looking at two correlations. The first would be the common time periods that the companies were both around. So, if one was around 2 years earlier you'd just drop that data and look at the rest. The second would be the relative time periods. In the second one you're not correlating actual time but time measured since the company went public. The former would be strongly influenced by general economic forces shared within the same time period. The latter would be influenced by properties shared by companies as they change after the IPO.
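To illustrate the two alignments, a short R sketch with simulated daily prices where Y (hypothetically) went public 504 trading days before X:

set.seed(7)
y <- 60 * exp(cumsum(rnorm(1500, sd = 0.01)))   # Y: 1500 trading days
x <- 40 * exp(cumsum(rnorm(996,  sd = 0.01)))   # X:  996 trading days, listed later

# 1) Calendar time: use only the days on which both stocks were trading
cor(diff(log(x)), diff(log(tail(y, length(x)))))

# 2) Relative time: align each series by days since its own IPO
n <- min(length(x), length(y))
cor(diff(log(x[1:n])), diff(log(y[1:n])))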
Correlation between two variables of unequal size
Given the extra information in your comments I'd recommend looking at two correlations. The first would be the common time periods that the companies were both around. So, if one was around 2 years
Correlation between two variables of unequal size Given the extra information in your comments I'd recommend looking at two correlations. The first would be the common time periods that the companies were both around. So, if one was around 2 years earlier you'd just drop that data and look at the rest. The second would be the relative time periods. In the second one you're not correlating actual time but time measured since the company went public. The former would be strongly influenced by general economic forces shared within the same time period. The latter would be influenced by properties shared by companies as they change after the IPO.
Correlation between two variables of unequal size Given the extra information in your comments I'd recommend looking at two correlations. The first would be the common time periods that the companies were both around. So, if one was around 2 years
27,532
Correlation between two variables of unequal size
Another way to solve such a problem is to impute the missing data for the shorter series using a time series model which may or may not make sense in a particular context. In your context, imputing the stock prices into the past would mean that you are asking the following counter-factual question: What would be the stock price for company X had it gone public n years in the past instead of when it actually went public? Such a data imputation could potentially be done by taking into account stock prices of related companies, general market trends etc. But, such an analysis may not make sense or may not be needed given the goals of your project.
Correlation between two variables of unequal size
Another way to solve such a problem is to impute the missing data for the shorter series using a time series model which may or may not make sense in a particular context. In your context, imputing t
Correlation between two variables of unequal size Another way to solve such a problem is to impute the missing data for the shorter series using a time series model which may or may not make sense in a particular context. In your context, imputing the stock prices into the past would mean that you are asking the following counter-factual question: What would be the stock price for company X had it gone public n years in the past instead of when it actually went public? Such a data imputation could potentially be done by taking into account stock prices of related companies, general market trends etc. But, such an analysis may not make sense or may not be needed given the goals of your project.
Correlation between two variables of unequal size Another way to solve such a problem is to impute the missing data for the shorter series using a time series model which may or may not make sense in a particular context. In your context, imputing t
27,533
Correlation between two variables of unequal size
Well, a lot depends on the assumptions you make. If you assume that the data are stationary, then more data for series one will give you a better estimate of its volatility. This estimate can be used to improve the correlation estimate. So the following statement is incorrect: "The history of Y's price before X went public is useless for assessing their subsequent correlation"
Correlation between two variables of unequal size
Well a lot depends on the assumptions you make. If you assume that the data is stationary then more data for series one will give you abetter estimate of its volatility. This estimate can be used to
Correlation between two variables of unequal size Well a lot depends on the assumptions you make. If you assume that the data is stationary then more data for series one will give you abetter estimate of its volatility. This estimate can be used to improve the correlation estimate. So the follwoing statment is incorrect: "The history of Y's price before X went public is useless for assessing their subsequent correlation"
Correlation between two variables of unequal size Well a lot depends on the assumptions you make. If you assume that the data is stationary then more data for series one will give you abetter estimate of its volatility. This estimate can be used to
27,534
Correlation between two variables of unequal size
This sounds like a problem for a machine learning algorithm. Therefore, I would try to figure out a set of features which describe a certain aspect of the trend and train on that. The whole machine learning theory is a bit too complex for this answer box, but it would be useful for you to read into it. But honestly, I think that already exists out there. Where money can be made, people put their minds to it.
Correlation between two variables of unequal size
This sounds like a problem for a machine learning algorithm. Therefore, I would try to figure out a set of features which describe a certain aspect of the trend and train on that. The whole machine
Correlation between two variables of unequal size This sounds like a problem for a machine learning algorithm. Therefore, I would try to figure out a set of features which describe a certain aspect of the trend and train on that. The whole machine learning theory is a bit to complex for this answer-box, but it would be useful for you to read into it. But honestly, I think that already exists out there. Where money can be made, people put their mind in it.
Correlation between two variables of unequal size This sounds like a problem for a machine learning algorithm. Therefore, I would try to figure out a set of features which describe a certain aspect of the trend and train on that. The whole machine
27,535
How to find a confidence interval for the total number of events
I would choose to use the negative binomial distribution, which returns the probability that there will be X failures before the k_th success, when the constant probability of a success is p. Using an example:

k = 17  # number of successes
p = .6  # constant probability of success

The mean and sd for the failures are given by

mean.X <- k*(1-p)/p
sd.X <- sqrt(k*(1-p)/p^2)

The distribution of the failures X will have approximately that shape

plot(dnbinom(0:(mean.X + 3 * sd.X), k, p), type='l')

So, the number of failures will be (with 95% confidence) approximately between

qnbinom(.025, k, p)
[1] 4

and

qnbinom(.975, k, p)
[1] 21

So your interval would be [k + qnbinom(.025,k,p), k + qnbinom(.975,k,p)] (using the example's numbers, [21, 38]).
How to find a confidence interval for the total number of events
I would choose to use the negative binomial distribution, which returns the probability that there will be X failures before the k_th success, when the constant probability of a success is p. Using an
How to find a confidence interval for the total number of events I would choose to use the negative binomial distribution, which returns the probability that there will be X failures before the k_th success, when the constant probability of a success is p. Using an example k=17 # number of successes p=.6 # constant probability of success the mean and sd for the failures are given by mean.X <- k*(1-p)/p sd.X <- sqrt(k*(1-p)/p^2) The distribution of the failures X, will have approximately that shape plot(dnbinom(0:(mean.X + 3 * sd.X),k,p),type='l') So, the number of failures will be (with 95% confidence) approximately between qnbinom(.025,k,p) [1] 4 and qnbinom(.975,k,p) [1] 21 So you inerval would be [k+qnbinom(.025,k,p),k+qnbinom(.975,k,p)] (using the example's numbers [21,38] )
How to find a confidence interval for the total number of events I would choose to use the negative binomial distribution, which returns the probability that there will be X failures before the k_th success, when the constant probability of a success is p. Using an
27,536
How to find a confidence interval for the total number of events
Assuming you want to pick a distribution for n, p(n), you can apply Bayes' law. You know that the probability of k events occurring given that n have actually occurred is governed by a binomial distribution $p(k|n) = {n \choose k} p^k (1-p)^{(n-k)}$ The thing you really want to know is the probability of n events having actually occurred, given that you observed k. By Bayes' law: $p(n|k) = \frac{p(k|n)p(n)}{p(k)}$ By applying the theorem of total probability, we can write: $p(n|k) = \frac{p(k|n)p(n)}{\sum_{n'} p(k|n')p(n')}$ So without further information about the distribution of $p(n)$ you can't really go any further. However, if you want to pick a distribution for $p(n)$ for which there is a value $n$ greater than which $p(n) = 0$, or sufficiently close to zero, then you can do a bit better. For example, assume that the distribution of $n$ is uniform in the range $[0,n_{max}]$. In this case: $p(n) = \frac{1}{n_{max}}$ The Bayesian formulation simplifies to: $p(n|k) = \frac{p(k|n)}{\sum_{n'} p(k|n')}$ As for the final part of the problem, I agree that the best approach is to perform a cumulative summation over $p(n|k)$, to generate the cumulative probability distribution function, and iterate until the 0.95 limit is reached. Given that this question migrated from SO, toy sample code in Python is attached below.

import numpy
from functools import reduce

p = 0.8
nmax = 200

def factorial(n):
    if n == 0:
        return 1
    return reduce(lambda a, b: a * b, range(1, n + 1), 1)

def ncr(n, r):
    return factorial(n) // (factorial(r) * factorial(n - r))

def binomProbability(n, k, p):
    p1 = ncr(n, k)
    p2 = p ** k
    p3 = (1 - p) ** (n - k)
    return p1 * p2 * p3

def posterior(n, k, p):
    def p_k_given_n(n, k):
        return binomProbability(n, k, p)
    def p_n(n):
        return 1. / nmax
    def p_k(k):
        return sum(p_n(nd) * p_k_given_n(nd, k) for nd in range(k, nmax))
    return (p_k_given_n(n, k) * p_n(n)) / p_k(k)

observed_k = 80
p_n_given_k = [posterior(n, observed_k, p) for n in range(0, nmax)]
cp_n_given_k = numpy.cumsum(p_n_given_k)

for n in range(0, nmax):
    print(n, p_n_given_k[n], cp_n_given_k[n])
How to find a confidence interval for the total number of events
Assuming you want to pick a distribution for n, p(n) you can apply Bayes law. You know that the probability of k events occuring given that n have actually occured is governed by a binomial distribtio
How to find a confidence interval for the total number of events Assuming you want to pick a distribution for n, p(n) you can apply Bayes law. You know that the probability of k events occuring given that n have actually occured is governed by a binomial distribtion $p(k|n) = {n \choose k} p^k (1-p)^{(n-k)}$ The thing you really want to know is the probability of n events having actually occured, given that you observed k. By Bayes lay: $p(n|k) = \frac{p(k|n)p(n)}{p(k)}$ By applying the theorem of total probability, we can write: $p(n|k) = \frac{p(k|n)p(n)}{\sum_{n'} p(k|n')p(n')}$ So without further information, about the distribution of $p(n)$ you can't really go any further. However, if you want to pick a distribution for $p(n)$ for which there is a value $n$ greater than which $p(n) = 0$, or sufficiently close to zero, then you can do a bit better. For example, assume that the distribution of $n$ is uniform in the range $[0,n_{max}]$. this case: $p(n) = \frac{1}{n_{max}}$ The Bayesian formulation simplifies to: $p(n|k) = \frac{p(k|n)}{\sum_{n'} p(k|n')}$ As for the final part of the problem, I agree that the best approach is to perform a cumulative summation over $p(n|k)$, to generate the cummulative probability distribution function, and iterate until the 0.95 limit is reached. Given that this question migrated from SO, toy sample code in python is attached below import numpy.random p = 0.8 nmax = 200 def factorial(n): if n == 0: return 1 return reduce( lambda a,b : a*b, xrange(1,n+1), 1 ) def ncr(n,r): return factorial(n) / (factorial(r) * factorial(n-r)) def binomProbability(n, k, p): p1 = ncr(n,k) p2 = p**k p3 = (1-p)**(n-k) return p1*p2*p3 def posterior( n, k, p ): def p_k_given_n( n, k ): return binomProbability(n, k, p) def p_n( n ): return 1./nmax def p_k( k ): return sum( [ p_n(nd)*p_k_given_n(nd,k) for nd in range(k,nmax) ] ) return (p_k_given_n(n,k) * p_n(n)) / p_k(k) observed_k = 80 p_n_given_k = [ posterior( n, observed_k, p ) for n in range(0,nmax) ] cp_n_given_k = numpy.cumsum(p_n_given_k) for n in xrange(0,nmax): print n, p_n_given_k[n], cp_n_given_k[n]
How to find a confidence interval for the total number of events Assuming you want to pick a distribution for n, p(n) you can apply Bayes law. You know that the probability of k events occuring given that n have actually occured is governed by a binomial distribtio
27,537
How to find a confidence interval for the total number of events
If you measure $k$ events and know your detection efficiency is $p$ you can automatically correct your measured result up to the "true" count $k_\mathrm{true} = k/p$. Your question is then about finding the range of $k_\mathrm{true}$ where 95% of the observations will fall. You can use the Feldman-Cousins method to estimate this interval. If you have access to ROOT there is a class to do this calculation for you. You would calculate the upper and lower limits with Feldman-Cousins from the uncorrected number of events $k$ and then scale them up to 100% with $1/p$. This way the actual number of measurements determines your uncertainty, not some scaled number that wasn't measured.

{
  gSystem->Load("libPhysics");

  const double lvl = 0.95;
  TFeldmanCousins f(lvl);

  const double p = 0.95;
  const double k = 13;
  const double k_true = k / p;
  const double k_bg = 0;

  const double upper = f.CalculateUpperLimit(k, k_bg) / p;
  const double lower = f.GetLowerLimit() / p;

  std::cout << "[" << lower << "..." << k_true << "..." << upper << "]" << std::endl;
}
How to find a confidence interval for the total number of events
If you measure $k$ events and know your detection efficiency is $p$ you can automatically correct your measured result up to the "true" count $k_\mathrm{true} = k/p$. Your question is then about findi
How to find a confidence interval for the total number of events If you measure $k$ events and know your detection efficiency is $p$ you can automatically correct your measured result up to the "true" count $k_\mathrm{true} = k/p$. Your question is then about finding the range of $k_\mathrm{true}$ where 95% of the observations will fall. You can use the Feldman-Cousins method to estimate this interval. If you have access to ROOT there is a class to do this calculation for you. You would calculate the upper and lower limits with Feldman-Cousins from the uncorrected number of events $k$ and then scale them up to 100% with $1/p$. This way the actual number of measurements determines your uncertainty, not some scaled number that wasn't measured. { gSystem->Load("libPhysics"); const double lvl = 0.95; TFeldmanCousins f(lvl); const double p = 0.95; const double k = 13; const double k_true = k/p; const double k_bg = 0; const double upper = f.CalculateUperLimit(k, k_bg) / p; const double lower = f.GetLowerLimit() / p; std::cout << "[" lower <<"..."<< k_true <<"..."<< upper << "]" << std::endl; }
How to find a confidence interval for the total number of events If you measure $k$ events and know your detection efficiency is $p$ you can automatically correct your measured result up to the "true" count $k_\mathrm{true} = k/p$. Your question is then about findi
27,538
How to find a confidence interval for the total number of events
I think you misunderstood the purpose of confidence intervals. Confidence intervals allow you to assess where the true value of the parameter is located. So, in your case, you can construct a confidence interval for $p$. It does not make sense to construct an interval for the data. Having said that, once you have an estimate of $p$ you can calculate the probability that you will observe different realizations such as 14, 15 etc using the binomial pdf.
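For instance, in R (the counts below are hypothetical placeholders): a confidence interval for $p$ from whatever data were used to estimate it, and then the binomial pmf over the possible observed counts given that $p$:

# CI for p, if p was estimated from, say, 13 detections out of 20 known events
binom.test(x = 13, n = 20)$conf.int

# Given an estimate of p, the probability of each possible observed count k
# out of n true events
p <- 0.95; n <- 14
dbinom(0:n, size = n, prob = p)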
How to find a confidence interval for the total number of events
I think you misunderstood the purpose of confidence intervals. Confidence intervals allow you to assess where the true value of the parameter is located. So, in your case, you can construct a confiden
How to find a confidence interval for the total number of events I think you misunderstood the purpose of confidence intervals. Confidence intervals allow you to assess where the true value of the parameter is located. So, in your case, you can construct a confidence interval for $p$. It does not make sense to construct an interval for the data. Having said that, once you have an estimate of $p$ you can calculate the probability that you will observe different realizations such as 14, 15 etc using the binomial pdf.
How to find a confidence interval for the total number of events I think you misunderstood the purpose of confidence intervals. Confidence intervals allow you to assess where the true value of the parameter is located. So, in your case, you can construct a confiden
27,539
Why is measure theory needed to understand continuous random variables and probability density functions in particular?
You arguably don't need measure theory to understand continuous random variables at all; those are just the random variables which are absolutely continuous with respect to Lebesgue measure. For most intents and purposes, the Riemann integral is sufficient in that case. After all, most commonly used probability densities have very nice regularity properties. Measure theory is needed, for example, when you need to justify things like the existence of sequences of random variables with prescribed joint distributions, or stochastic processes more generally (e.g., try proving that Brownian motion exists without measure theoretic results like the Kolmogorov extension and continuity theorems). Another benefit of using measure theory is that it unifies the seemingly similar but distinct continuous and discrete worlds, and allows talking about random variables which are neither. Elementary treatments of probability often duplicate effort by proving a result in the discrete case and then in the continuous case. Using measure theory, one can sometimes prove both (and more) at the same time with a proof that might better reveal the important factors at play. Finally, why isn't measure theory needed in the discrete case? This is arguably because the dominating measure involved (counting measure) is so easy to work with. For one, null sets don't matter, because the only set with zero counting measure is the empty set. Secondly, most calculations with discrete random variables amount to regular sums (albeit sometimes infinite). This makes problems involving discrete random variables tractable even with a very limited mathematical toolkit at your disposal.
Why is measure theory needed to understand continuous random variables and probability density funct
You arguably don't need measure theory to understand continuous random variables at all; those are just the random variables which are absolutely continuous with respect to Lebesgue measure. For most
Why is measure theory needed to understand continuous random variables and probability density functions in particular? You arguably don't need measure theory to understand continuous random variables at all; those are just the random variables which are absolutely continuous with respect to Lebesgue measure. For most intents and purposes, the Riemann integral is sufficient in that case. After all, most commonly used probability densities have very nice regularity properties. Measure theory is needed, for example, when you need to justify things like the existence of sequences of random variables with prescribed joint distributions, or stochastic processes more generally (e.g., try proving that Brownian motion exists without measure theoretic results like the Kolmogorov extension and continuity theorems). Another benefit of using measure theory is that it unifies the seemingly similar but distinct continuous and discrete worlds, and allows talking about random variables which are neither. Elementary treatments of probability often duplicate effort by proving a result in the discrete case and then in the continuous case. Using measure theory, one can sometimes prove both (and more) at the same time with a proof that might better reveal the important factors at play. Finally, why isn't measure theory needed in the discrete case? This is arguably because the dominating measure involved (counting measure) is so easy to work with. For one, null sets don't matter, because the only set with zero counting measure is the empty set. Secondly, most calculations with discrete random variables amount to regular sums (albeit sometimes infinite). This makes problems involving discrete random variables tractable even with a very limited mathematical toolkit at your disposal.
Why is measure theory needed to understand continuous random variables and probability density funct You arguably don't need measure theory to understand continuous random variables at all; those are just the random variables which are absolutely continuous with respect to Lebesgue measure. For most
27,540
Product of 2 Uniform random variables is greater than a constant with convolution
There's really not much point in doing a change of variables here because it doesn't really buy you anything (even if you were doing it for non-uniform RVs). But if you insist: if you are trying to evaluate the integral $$P(XY>\alpha) = \int_0^1\left(\int_0^1 f(x,y) I(xy>\alpha) dy\right)dx$$ you can't directly apply the substitution $x=z/y$ to the outer integral. You need to exchange the integrals first: $$= \int_0^1\left(\int_{x=0}^{x=1} f(x,y) I(xy>\alpha) dx\right)dy$$ Now, we can apply the substitution $x=z/y$, $dx=dz/y$ and limits $z=0$ to $z=y$ to the inner integral: $$= \int_0^1\left(\int_{z=0}^{z=y} f(z/y,y) I(z>\alpha) \frac{dz}y\right)dy$$ Combining the integration limits and the indicator is difficult. We need to consider the cases where $y$ is less than and greater than $\alpha$ separately: \begin{align} &= \int_0^\alpha\left(\int_{z=0}^{z=y} f(z/y,y) I(z>\alpha) \frac{dz}y\right)dy + \int_\alpha^1\left(\int_{z=0}^{z=y} f(z/y,y) I(z>\alpha) \frac{dz}y\right)dy\\ &= 0 + \int_\alpha^1\left(\int_{z=\alpha}^{z=y} f(z/y,y) \frac{dz}y\right)dy \end{align} Note that in the case of the left integral, where $0\leq y \leq \alpha$, we also have $z \leq y \leq \alpha$, so the indicator is always zero, and that whole integral is 0. In the case of the right integral, we have $y > \alpha$, so for the inner integral $\int_{z=0}^{z=y}$, the indicator is zero for $0 \leq z \leq \alpha$ and one for $\alpha \leq z \leq y$, which gives us our final limits. Now, knowing that $f(z/y,y)=1$ over the limits of integration, we can write: $$=\int_\alpha^1\left(\int_{z=\alpha}^{z=y}\frac{dz}y\right)dy$$ and I imagine you can finish it off to get the result $1-\alpha+\alpha \log \alpha$, which was already more or less given in another answer.
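A quick Monte Carlo check of that closed form in R, using $\alpha = 0.5$ purely as an example value:

set.seed(1)
alpha <- 0.5
x <- runif(1e6); y <- runif(1e6)
mean(x * y > alpha)                 # simulated P(XY > alpha)
1 - alpha + alpha * log(alpha)      # closed form, about 0.1534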
Product of 2 Uniform random variables is greater than a constant with convolution
There's really not much point in doing a change of variables here because it doesn't really buy you anything (even if you were doing it for non-uniform RVs). But, if you insist, if you are trying to e
Product of 2 Uniform random variables is greater than a constant with convolution There's really not much point in doing a change of variables here because it doesn't really buy you anything (even if you were doing it for non-uniform RVs). But, if you insist, if you are trying to evaluate the integral: $$P(XY>\alpha) = \int_0^1\left(\int_0^1 f(x,y) I(xy>\alpha) dy\right)dx$$ you can't directly apply the substitution $x=z/y$ to the outer integral. You need to exchange the integrals first: $$= \int_0^1\left(\int_{x=0}^{x=1} f(x,y) I(xy>\alpha) dx\right)dy$$ Now, we can apply the substitution $x=z/y$, $dx=dz/dy$ and limits $z=0$ to $z=y$ to the inner integral: $$= \int_0^1\left(\int_{z=0}^{z=y} f(z/y,y) I(z>\alpha) \frac{dz}y\right)dy$$ Combining the integration limits and the indicator is difficult. We need to consider the cases where $y$ is less than and greater than $\alpha$ separately: \begin{align} &= \int_0^\alpha\left(\int_{z=0}^{z=y} f(z/y,y) I(z>\alpha) \frac{dz}y\right)dy + \int_\alpha^1\left(\int_{z=0}^{z=y} f(z/y,y) I(z>\alpha) \frac{dz}y\right)dy\\ &= 0 + \int_\alpha^1\left(\int_{z=\alpha}^{z=y} f(z/y,y) \frac{dz}y\right)dy \end{align} Note that in the case of the left integral, where $0\leq y \leq \alpha$, we also have $z \leq y \leq \alpha$, so the indicator is always zero, so that whole integral is 0. In the case of the right integral, we have $y > \alpha$, so for the inner integral $\int_{z=0}^{z=y}$, the indicator is zero for $0 \leq z \leq \alpha$ and one for $\alpha \leq z \leq y$, so that gives us our final limits. Now, knowing that $f(z/y,y)=1$ over the limits of integration, we can write: $$=\int_\alpha^1\left(\int_{z=\alpha}^{z=y}\frac{dz}y\right)dy$$ and I imagine you can finish it off to get the result $1-\alpha+\alpha \log \alpha$, which was already more or less given in another answer.
Product of 2 Uniform random variables is greater than a constant with convolution There's really not much point in doing a change of variables here because it doesn't really buy you anything (even if you were doing it for non-uniform RVs). But, if you insist, if you are trying to e
27,541
Product of 2 Uniform random variables is greater than a constant with convolution
Some hints: Geometrical approaches are much easier for uniform RVs, but the general approach is to integrate the joint PDF in the region that satisfy $XY>\alpha$. The integral will basically look like below: $$\mathbb P(XY>\alpha)=\iint_{xy>\alpha} f_{X,Y}(x,y)dydx$$ The actual boundaries of the integrals will change with respect to your support.
27,542
Product of 2 Uniform random variables is greater than a constant with convolution
Multiple answers and partial answers here, some for the more general problem of multiplying $n$ independent standard uniform random variables. For $n = 2,$ the PDF of the product $Z = XY$ is $f(z) = -\log(z),$ for $0 < z < 1,$ which I believe agrees with @gunes' answer (+1) for the product of two standard uniform random variables. The following simulation gives a histogram in agreement with this PDF. The red superimposed curve shows this density function.

set.seed(2020)
x = runif(10^6);  y = runif(10^6)
z = x*y
summary(z)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
0.00000 0.06793 0.18690 0.25011 0.38269 0.99907 
hist(z, prob=T, br=40, col="skyblue2")
curve(-log(x), add=T, col="red", lwd=2)

The CDF is $F_Z(z) = P(Z \le z) = z - z\log(z),$ for $0 < z < 1.$ So $F_Z(.5) = 0.8466$ is the requested probability.

z = .5;  z - z*log(z)
[1] 0.8465736

An empirical CDF (ECDF), based on the million simulated values of $Z,$ is shown below as a thin black line. The dashed red line is $F_Z(z)$ as given above. The match is essentially perfect within the resolution of the plot.

plot(ecdf(z))
curve(x - x*log(x), add=T, col="red", lwd=3, lty="dashed")
abline(v = .5, col="blue", lty="dotted")
abline(h = 0.8466, col="blue", lty="dotted")
27,543
Product of 2 Uniform random variables is greater than a constant with convolution
You might indeed try some coordinate transforms. E.g. instead of integrating $$\int \int f(x,y) I(xy>a)\, dx\, dy$$ you could transform to other variables and integrate $$\int \int g(w,z) I(z>a)\, dw\, dz$$ in which case the indicator function is easier to evaluate.

The transform

Say you use $w = y$ and $z = xy$. The distribution function can be computed using the Jacobian $$J(w,z) = \frac{\partial x}{\partial w}\frac{\partial y}{\partial z} - \frac{\partial x}{\partial z}\frac{\partial y}{\partial w} = - \frac{1}{w}$$ and $$g(w,z) = f(x(w,z),y(w,z))\,|J(w,z)| = \frac{1}{w}$$

Integration and domain

For the integration we need to take care that the domain is $$0 \leq z \leq 1 \quad \text{and} \quad z \leq w \leq 1$$ and the domains for each coordinate are not independent. Now the integration becomes (the indicator function is gone now and you see it back in the formula as the lower limit for the integration with $dz$) $$\int_a^1 \int_{z}^1 \frac{1}{w}\, dw\, dz $$ The inner term is $$ \int_{z}^1 \frac{1}{w}\, dw = \log(w) \big|_{z}^1 = - \log(z)$$ and you get $$P(Z > a) = \int_{a}^1 - \log(z)\, dz = z - z\log(z) \big|_{a}^1 = 1 - a + a \log(a)$$ Note that if you differentiate the corresponding CDF you get $$f_Z(a)= \partial_a P(Z\leq a) = \partial_a \int_{-\infty}^a \int g(w,z)\, dw\, dz = \int g(w,a)\, dw $$ and this is how people often compute the pdf $\int |y^{-1}| f(z/y,y)\, dy$. So using a coordinate transform is not so uncommon to compute a product distribution.
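As a quick numerical sanity check (my addition, not part of the original answer), the single integral left after the transform can be evaluated in R and compared with the closed form; $a = 0.5$ is an arbitrary example.

a <- 0.5
integrate(function(z) -log(z), lower = a, upper = 1)$value   # the transformed integral
1 - a + a * log(a)                                            # closed-form result, about 0.1534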
27,544
In the case of linear regression, if the parameters are uncorrelated, does this make the model better? If yes, why? [closed]
This depends on what you mean by "make the model better". Do you want to use this model to say something about how the world works, or to make predictions? If the covariates are uncorrelated, then the beta values associated with them will generally be close to independent. (This is related but not identical to the idea of parameter orthogonality.) This is useful if you want to interpret the betas as saying something about the real world and you don't want them to be confounded with each other. If you are concerned about the accuracy of the model's predictions, then it doesn't really make any difference. The beta values will be correlated, but the predictions will be unaffected. You could orthogonalise your covariates and that would completely change the definition and interpretation of beta, but the fitted values, residuals and predictions would be the same as before.
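A small R sketch (my addition, with simulated data) illustrates the last point: orthogonalising one covariate against the other changes the coefficients but leaves the fitted values, and hence the predictions, untouched.

set.seed(42)
n  <- 200
x1 <- rnorm(n)
x2 <- 0.7 * x1 + rnorm(n)            # deliberately correlated with x1
y  <- 1 + 2 * x1 - x2 + rnorm(n)
x2_orth <- residuals(lm(x2 ~ x1))    # orthogonalise x2 against x1
fit_raw  <- lm(y ~ x1 + x2)
fit_orth <- lm(y ~ x1 + x2_orth)
coef(fit_raw); coef(fit_orth)                   # the coefficients differ
all.equal(fitted(fit_raw), fitted(fit_orth))    # the fitted values are identical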
27,545
In the case of linear regression, if the parameters are uncorrelated, does this make the model better? If yes, why? [closed]
I presume by parameters you mean the features, which is quite unusual as @whuber commented. The next paragraph follows on this assumption. Not necessarily. Highly correlated features can cause multi-collinearity, but this doesn't mean that a model with correlated features is worse than one with uncorrelated features. A model can have a set of correlated features that describes the target variable very well, or a set of uncorrelated features that is not related to the target variable in any way. For parameter estimate uncorrelatedness, using a similar idea, assume you have uncorrelated random features that are also not related to the target variable. Since the features are totally random, the parameter estimates will also be random and show no correlation. So, it is still hard to say the model is better if you have no correlation.
27,546
In the case of linear regression, if the parameters are uncorrelated, does this make the model better? If yes, why? [closed]
I agree with @gunes that you might stumble on cases where training on highly correlated features will yield better results than on an uncorrelated feature set, provided that your features are good (i.e. explain the target well). In my experience though, it's better to get rid of highly correlated features, because this will simplify your model and won't harm the predictability too much (because if cor(x, y) is high, it's enough to know either of those features to get the prediction). For example, if you have the square feet of the house and the number of rooms in it, those features are most likely highly correlated, so you might consider taking just the most informative of them, thereby simplifying the model while still retaining the accuracy. On the other hand, if all your features are uncorrelated, each one of them gives your model a different perspective on the problem, which will help it generalize better. Hope that helps. Cheers.
27,547
In the case of linear regression, if the parameters are uncorrelated, does this make the model better? If yes, why? [closed]
In my estimation, your question is more aligned with @whuber's third interpretation noted in the comments. Here's a simple linear regression model: $$ Y = \beta_{0} + \beta_{1}X_{1} + \epsilon. $$ I will assume you have already built a model and you are investigating the impact of a variable $X_{1}$ that you believe to have a causal effect on your dependent variable $Y$. At this point, you may want to investigate the effect of other variables on your outcome. However, you discovered that other features in your dataset are related to $Y$, or may predict $Y$, but have no association with $X_{1}$. In this case, I would argue that these variables can be safely omitted from your analysis. For the sake of this explanation, I assume you are not automating your choice of predictor variables and a basic explanatory model has already been considered. One of the primary goals of regression analysis is to 'separate out' the association of $X_{1}$ with other variables on the right-hand side of the equation so we can examine $X_{1}$'s unique influence on $Y$. Now, here's a second model with a control variable, $X_{2}$, included: $$ Y = \beta_{0} + \beta_{1}X_{1} + \beta_{2}X_{2} + \epsilon. $$ In general, two conditions must be met. First, the variable $X_{2}$ should also be associated with $Y$. Second, the variable should be correlated with $X_{1}$, but not perfectly correlated. If $X_{2}$ is correlated with $X_{1}$, then including it in the foregoing equation affords us the ability to examine the effect of $X_{1}$ on $Y$ while holding $X_{2}$ fixed. If, however, the latter condition is not met and $X_{2}$ is uncorrelated with $X_{1}$, then this variable can be dropped from the analysis. I would argue that it more likely should be dropped in cases where $X_{2}$ is explicitly measured and explicitly included, yet is unrelated to the main explanatory variable(s) already in the model. Again, one important feature of multiple regression is to purge $X_{1}$'s correlation with $X_{2}$. Throwing in a series of orthogonal regressors, if large, decreases the precision of the estimated coefficients. So from my perspective, I wouldn't say a model is "better" with more irrelevant controls on the right-hand side of your equation. I agree with @MichaelSidoroff's answer that once a set of uncorrelated features enters the model, and you didn't have any a priori theoretical basis for including them, each factor offers a different perspective on the phenomenon under study. This is also why multiple regression is often not necessary in most randomized studies. Randomization removes any correlation between the main treatment variable (independent variable) under study and other observed (and unobserved) characteristics of individuals. Thus, there is no need to explicitly control for the other observed factors across individuals using a multiple regression framework, because the correlation has been removed (or at least we hope it has).
27,548
In the case of linear regression, if the parameters are uncorrelated, does this make the model better? If yes, why? [closed]
I am not a statistician, so I would be happy to be corrected by the other users if this answer is wrong/naive. Anyway: from the point of view of a numerical analyst, I would say yes, it is better, because then you can conclude that the matrix to (pseudo-)invert is well conditioned, and hence your solution will not be highly sensitive to perturbations of the input data (i.e., the observations that you are trying to fit).
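To put a number on the conditioning argument, here is a small R sketch (my addition, with simulated data) comparing the condition number of a design matrix with nearly collinear columns to one whose columns are roughly uncorrelated.

set.seed(1)
n  <- 500
x1 <- rnorm(n)
X_corr   <- cbind(1, x1, x1 + rnorm(n, sd = 0.01))   # nearly collinear columns
X_uncorr <- cbind(1, x1, rnorm(n))                    # roughly uncorrelated columns
kappa(X_corr)      # very large condition number: solution sensitive to perturbations
kappa(X_uncorr)    # small condition number: well-conditioned least-squares problem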
27,549
In the case of linear regression, if the parameters are uncorrelated, does this make the model better? If yes, why? [closed]
It is a very good question. The concept related to your question is multicollinearity. When the predictor variables (a.k.a. parameters here) are correlated, we call that scenario multicollinearity. The presence or absence of multicollinearity does not give an indication of our model's accuracy. You can get an idea of the multicollinearity in your model by running a regression analysis in any statistical software like 'Minitab' or 'SPSS'. In the output, you will see a metric called 'VIF'. It is the short form for the Variance Inflation Factor. VIF points out the variables that are correlated. So if the VIF > 10, you can conclude that multicollinearity affects your model in a bad way and it is better to drop those variables. This is the way that you can decide whether having uncorrelated parameters in the model makes it better. If you need more information on this topic, please visit
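If you happen to work in R rather than Minitab or SPSS, the same diagnostic is available; a minimal sketch using the car package (my addition, with simulated data) might look like this.

# install.packages("car")   # if the package is not yet installed
library(car)
set.seed(7)
n  <- 300
x1 <- rnorm(n)
x2 <- 0.9 * x1 + rnorm(n, sd = 0.2)   # strongly related to x1
x3 <- rnorm(n)                         # unrelated predictor
y  <- 1 + x1 + x2 + x3 + rnorm(n)
fit <- lm(y ~ x1 + x2 + x3)
vif(fit)    # large VIFs for x1 and x2 flag the collinearity; x3 stays near 1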
27,550
Can k-means be used for non normally distributed data?
Here is the full quote: K-means, being an instance of the Gaussian Mixture Model (GMM), assumes Gaussian data distribution [20][26]. It then follows that nearly all inliers (precisely 99.73%) will have point-to-centroid distances within 3 standard deviations ($\sigma$) from the population mean. It appears in section IV.A. The application to the Iris dataset, which, as you note, is not normally distributed, appears in section V ("Experiments"). I do not see a logical problem with first noting an algorithm's properties under certain assumptions, such as normality, and then testing it in cases where the assumption is not valid. And of course, k-means can be applied to any dataset. Whether it yields useful results is a different matter.
27,551
Can k-means be used for non normally distributed data?
I'm not sure what the question is exactly, but standard deviation isn't just defined for normal distributions. It's a measure relevant for all data distributions. The farther away you are from the mean (in terms of standard deviations), the more unlikely a point typically is to occur. The only thing special about the normal distribution regarding the standard deviation is that you know the probability of a point occurring within 1, 2 or 3 standard deviations (e.g. a point has a 99.7% probability of lying within $\pm 3$ standard deviations of the mean). This however doesn't mean that standard deviation is irrelevant for other (possibly unknown) distributions. It is still relevant, but you don't know the probability associated with it.
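A one-line R illustration (my addition) of that last point: for a clearly non-normal distribution the standard deviation is still perfectly well defined, but the familiar 68/95/99.7 percentages no longer hold.

set.seed(3)
x <- rexp(1e6)                         # exponential(1): mean 1, sd 1, strongly skewed
mean(abs(x - mean(x)) <= 3 * sd(x))    # about 0.98 here, not the normal-theory 0.9973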
27,552
Mean or sum of gradients for weight updates in SGD
The following assumes a loss function $f$ that's expressed as a sum, not an average. Expressing the loss as an average means that the scaling $\frac{1}{n}$ is "baked in" and no further action is needed. In particular, note that F.mse_loss uses reduction="mean" by default, so in the case of OP's code, no further modification is necessary to achieve an average of gradients. Indeed, rescaling the gradients and using reduction="mean" does not accomplish the desired result and amounts to a reduction in the learning rate by a factor of $\frac{1}{n}$. Suppose that $G = \sum_{i=1}^n \nabla f(x_i)$ is the sum of the gradients for some minibatch with $n$ samples. The SGD update with learning rate (step size) $r$ is $$ x^{(t+1)} = x^{(t)}- r G. $$ Now suppose that you use the mean of the gradients instead. This will change the update. If we use learning rate $\tilde{r}$, we have $$ x^{(t+1)} = x^{(t)}- \frac{\tilde{r}}{n} G. $$ These expressions can be made to be equal by re-scaling either $r$ or $\tilde{r}$. So in that sense, the distinction between the mean and the sum is unimportant because $r$ is chosen by the researcher in either case, and choosing a good $r$ for the sum has an equivalent, rescaled $\tilde{r}$ for the mean. One reason to prefer using the mean, though, is that this de-couples the learning rate and the minibatch size, so that changing the number of samples in the minibatch will not implicitly change the learning rate. Note that it's standard to use the mean of the minibatch, rather than the entire training set. However, the same re-scaling argument above applies here, too -- if you're tuning the learning rate, for a fixed-size data set you'll find a learning rate which works well, and this learning rate can be re-scaled to be suitable for a gradient descent that uses the sum in place of some mean.
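A tiny R sketch (my addition, with made-up data) of the rescaling argument: one step of gradient descent using the summed gradient with rate $r$ lands at exactly the same point as a step using the mean gradient with rate $nr$.

set.seed(0)
n <- 32
x <- rnorm(n);  y <- 3 * x + rnorm(n)
w <- 0                                        # single parameter, per-sample loss (y_i - w x_i)^2
grad_i <- function(w) -2 * x * (y - w * x)    # vector of per-sample gradients
r <- 0.01
w_sum  <- w - r       * sum(grad_i(w))        # update with the summed gradient, rate r
w_mean <- w - (r * n) * mean(grad_i(w))       # update with the mean gradient, rate n*r
c(w_sum, w_mean)                               # identical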
27,553
log mean vs mean log in statistics
There is a potential confusion in terminology here, as this question, for example, seems to take "log-mean" to be the mean of the logs. Putting aside that confusion, here's a simple example. Say you have 3 measurements with values of 1, 10, and 100. Their mean value is $\frac{111}{3}$=37. The base 10 logarithm of 37 is 1.57, which is the log of their mean value in the original scale. The base 10 logarithms of the original data are 0, 1, and 2; the mean of the logarithms is 1, corresponding to a value of 10 in the original scale. If a log transformation of the data is appropriate then you should typically do the transformation on the original data first, whatever you call that process.
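The arithmetic in this example is easy to reproduce in R (my addition):

x <- c(1, 10, 100)
log10(mean(x))    # log of the mean: about 1.57
mean(log10(x))    # mean of the logs: exactly 1, i.e. 10 on the original scale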
27,554
log mean vs mean log in statistics
Consumption and disposable income are often thought of as exponential growth processes, something like $x_t=x_0e^{rt}$, where $r$ is the rate of growth. If you take a log, you get $\ln x_t=\ln x_0+rt$, and the function becomes linear. So we linearize the model by taking a log. That's the motivation. In reality there's always stochastic noise in the data. To see how it's handled, let's first write the above equation in a difference form: $\Delta \ln x_t=\ln x_t -\ln x_{t-1}=r$. One way to make this process stochastic is to add noise to the rate of change as follows: $$\Delta \ln x_t=r+\varepsilon_t\\\varepsilon_t\sim\mathcal N(0,\sigma^2)$$ Suppose that we agree with this process. We want to estimate the parameters of the process. We have the following for the rate of growth: $$E[\Delta \ln x_t]=r$$ which leads to an obvious estimator $$\hat r=\frac 1 n\sum_{t=1}^n\Delta \ln x_t$$ So, for a random walk with a drift process like the one above, the mean of the (differenced) logarithm makes sense in the estimator of the growth rate.
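A short R simulation (my addition) of this estimator: generate a log-linear growth process with noise and recover the growth rate as the mean of the log differences.

set.seed(10)
r_true <- 0.02
eps    <- rnorm(500, sd = 0.05)
log_x  <- cumsum(r_true + eps)      # ln x_t: a random walk with drift r_true
r_hat  <- mean(diff(log_x))         # mean of the Delta ln x_t
r_hat                                # close to 0.02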
27,555
Does a positive interaction term imply correlation between its constituent variables?
No, a non-zero $\beta_3$ does not imply $A$ and $B$ are correlated. It implies $y$ is correlated with $AB$. Simple example: Imagine we have data on visits by people to a gas station. Let $A$ be the volume of someone's gas tank in gallons. Let $B$ be the price of gas at the time of the visit. Let $y$ be the spending on gas this visit. $A \cdot B$ is how much it would cost to fill the person's gas tank. $AB$ is almost certainly correlated with $y$, the spending on gas this visit. A positive $\beta_3$ in this trivial example does not imply that the size of someone's gas tank is correlated with the price of gas. A positive $\beta_3$ would mean that spending $y$ is positively related to the carrying capacity of someone's gas tank measured in dollars (i.e. $AB$).
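A simulation in the spirit of this example (my addition; all numbers are made up) shows a clearly non-zero interaction coefficient even though $A$ and $B$ are generated independently, and therefore uncorrelated.

set.seed(5)
n <- 1000
A <- runif(n, 10, 25)          # tank volume in gallons, generated independently of price
B <- runif(n, 2, 5)            # gas price per gallon
y <- 0.8 * A * B + rnorm(n)    # spending driven by the product A*B
cor(A, B)                      # essentially zero
summary(lm(y ~ A * B))         # the A:B interaction term is strongly significant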
27,556
Does a positive interaction term imply correlation between its constituent variables?
Here is a potential applied counterexample: suppose $A$ is gender, $B$ is years of schooling and $y$ is labor-market earnings. So, after, say, 12 years of primary and secondary school and a three-year Bachelor degree, you would have completed 15 years of schooling. Then, it is not totally off to assume that $A$ and $B$ are uncorrelated - in the past, men used to have higher degrees, nowadays, if anything, women. So there probably was a moment in the (not so distant) past when gender and years of schooling were uncorrelated, and the correlation certainly is not strong today. And yet, it is not difficult to make a case that $\beta_3\neq0$, as an additional year of schooling may have a different effect on earnings for men than for women. This would, for example, be the case when there is wage "discrimination" (in quotation marks as it is a hotly debated issue) mostly in jobs for more highly educated employees. Anecdotal evidence suggests that this may be the case, as male executives tend to be better paid than female ones. On the other hand, salaries in jobs that require less education may be more frequently determined by broad agreements between unions and employers' associations (at least in, for example, continental Europe), leaving less room for wage discrimination. (The quotation marks could for example be justified by the fact that this simple story does not account for sectors, experience, etc.)
27,557
Generate random numbers with linear distribution
There are many methods. Here are a few. You could use rejection ("accept-reject") with a uniform envelope. You could use the inverse cdf method on the density, by working out the cdf and inverting it: $X=F^{-1}(U)$. You could split the density into a uniform and a triangular part (i.e. a finite mixture of the two). The triangular part can be generated in any of several ways (e.g. the $\max$ of two uniforms, or using the inverse cdf method, ...) and then scaled to the right interval, and the uniform is trivial (simply scaled to the right interval). If $x_1$ is positive, you could treat the density as triangular on $(0,x_2)$ and then use rejection, discarding the draws that fall below $x_1$. This will work pretty well if $x_1/x_2$ is small (a good bit less than half, say). You could use the ziggurat method. There are a number of other approaches. The choice between them would depend on considerations such as how much convenience vs speed matters (if you only need a few thousand values, speed probably doesn't matter much; if you need to use it many many times with potentially long runs, then it may matter much more).
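To make the uniform-plus-triangular idea concrete, here is a hedged R sketch (my addition; the function name rlinear, its arguments, and the mixture weights are all my own construction under the stated assumptions). It assumes a density rising linearly from a height proportional to f1 at x1 to f2 at x2, with 0 <= f1 <= f2; a falling density would simply flip the triangular component.

rlinear <- function(n, x1, x2, f1, f2) {
  # mixture weights: the flat component carries 2*f1/(f1+f2) of the total mass
  p_unif <- 2 * f1 / (f1 + f2)
  pick_unif <- runif(n) < p_unif
  tri <- x1 + (x2 - x1) * pmax(runif(n), runif(n))   # rising triangular via max of two uniforms
  uni <- runif(n, x1, x2)                             # flat component
  ifelse(pick_unif, uni, tri)
}
z <- rlinear(1e5, x1 = 2, x2 = 5, f1 = 1, f2 = 3)     # arbitrary example values
hist(z, breaks = 50, prob = TRUE)                     # density should rise linearly from x1 to x2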
27,558
Generate random numbers with linear distribution
This reminds me of another post about a linear pdf which had functional form: $$ h(x) = \frac{1+\alpha x}{2}, \quad \quad x \in [-1,1], \quad \alpha\in[-1,1] $$ (source: tri.org.au) I called this an 'acute linear' distribution, or a cute linear distribution. For $\alpha\in[0,1]$, if $X_1\sim \text{Triangular}(-1,1,1)$ and $X_2\sim \text{Uniform}(-1,1)$, then the mixture that takes the value of $X_1$ with probability $\alpha$ and the value of $X_2$ with probability $1-\alpha$ (i.e. the density $\alpha f_{X_1}+(1-\alpha) f_{X_2}$, not the weighted sum of the random variables themselves) has a cute Linear distribution.

Pseudo-random number generation

The cdf (within the domain of support) is: $$H = \frac{1}{4} (x+1) (\alpha (x-1)+2)$$ The inverse cdf is: $$x = H^{-1}(u) = \frac{\sqrt{\alpha ^2-2 \alpha +4 \alpha u+1}-1}{\alpha }$$ Replacing $u$ with a pseudo-random drawing from $\text{Uniform}(0,1)$ then yields a pseudo-random drawing from the above cute linear pdf $h(x)$. If you wish to change the scale, or shift it, you can transform the data $X_{data}$ you generate .. e.g. $Y = b + c X_{data}$ ... which should be able to generate the richness of whatever structure you desire (might require a little bit of playing around, depending on what you are holding fixed).
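A short R sketch (my addition) implementing that inverse cdf and checking the draws against the target density; $\alpha = 0.6$ is an arbitrary choice, and $\alpha = 0$ would need separate handling (it is just $\text{Uniform}(-1,1)$).

set.seed(2020)
alpha <- 0.6                                                        # example value
u <- runif(1e5)
x <- (sqrt(alpha^2 - 2 * alpha + 4 * alpha * u + 1) - 1) / alpha    # inverse cdf from above
hist(x, breaks = 50, prob = TRUE)
curve((1 + alpha * x) / 2, add = TRUE, col = "red", lwd = 2)        # target density h(x)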
27,559
Applying Rubin's rule for combining multiply imputed datasets
Rubin's rules can only be applied to parameters following a normal distribution. For parameters with an F or Chi Square distribution a different set of formulas is needed: Allison, P. D. (2002). Missing data. Newbury Park, CA: Sage. For performing an ANOVA on multiple imputed datasets you could use the R package miceadds (pdf; miceadds::mi.anova).

Update 1

Here is a complete example: Export your data from SPSS to R. In SPSS save your dataset as .csv. Read in your dataset:

library(miceadds)
dat <- read.csv(file='your-dataset.csv')

Let's assume that reading is your dependent variable and that you have two factors: gender, with male = 0 and female = 1, and treatment, with control = 0 and 'received treatment' = 1. Now let's convert them to factors:

dat$gender <- factor(dat$gender)
dat$treatment <- factor(dat$treatment)

Convert your dataset to a mids object, where we assume that the first variable holds the imputation number (Imputation_ in SPSS):

dat.mids <- as.mids(dat)

Now you can perform an ANOVA:

fit <- mi.anova(mi.res=dat.mids, formula="reading~gender*treatment", type=3)
summary(fit)

Update 2

This is a reply to your second comment: What you describe here is a data import/export related problem between SPSS and R. You could try to import the .sav file directly into R and there are a bunch of dedicated packages for that: foreign, rio, gdata, Hmisc, etc. I prefer the csv-way, but that's a matter of taste and/or depends on the nature of your problem. Maybe you should also check some tutorials on youtube or other sources on the internet.

library(foreign)
dat <- read.spss(file='path-to-sav', use.value.labels=F, to.data.frame=T)

Update 3

This is a reply to your first comment: Yes, you can do your analysis in SPSS and pool the F values in miceadds (please note this example is taken from the miceadds::micombine.F help page):

library(miceadds)
Fvalues <- c(6.76, 4.54, 4.23, 5.45, 4.78,
             6.76, 4.54, 4.23, 5.45, 4.78,
             6.76, 4.54, 4.23, 5.45, 4.78,
             6.76, 4.54, 4.23, 5.45, 4.78)
micombine.F(Fvalues, df1=4)
27,560
Applying Rubin's rule for combining multiply imputed datasets
You correctly wrote down the pooled estimator: $$ \bar{U} = \frac{1}{m} \sum_{i=1}^m U_i$$ where $U_i$ represents the analytic results from the $i$-th imputed dataset. Normally, analytic results have some normal approximating distribution from which we draw inference or create confidence bounds. This is mainly done using the mean value ($U_i$) and its standard error. T-tests, linear regressions, logistic regressions, and basically most analyses can be adequately summarized in terms of that value $U_i$ and its standard error $\text{se}(U_i)$. Rubin's Rules uses the law of total variance to write down the variance as the sum of a between and within imputation variance: $$\text{var}(\bar{U}) = E[\text{var}(\bar{U}|U_i)] + \mbox{var}\left(E[\bar{U}|U_i]\right)$$ The first term is the within-imputation variance, estimated as $E[\text{var}(\bar{U}|U_i)] = \frac{1}{m}\sum_{i=1}^m V_i$, where $V_i$ is the variance of the analysis result from the $i$-th complete or imputed dataset. The latter term is the between-imputation variance, including the small-$m$ correction: $ \mbox{var}\left(E[\bar{U}|U_i]\right) = \frac{m+1}{m(m-1)} \sum_{i=1}^m\left(U_i - \bar{U}\right)^2$. I've never quite grasped the DF correction here, but this is basically the accepted approach. Anyway, since the recommended number of imputations is small (Rubin suggests as few as 5), it is typically possible to compute this number by hand, fitting each analysis separately. A by-hand example is listed below:

require(mice)
set.seed(123)
nhimp <- mice(nhanes)
sapply(1:5, function(i) {
  fit <- lm(chl ~ bmi, data=complete(nhimp, i))
  print(c('coef'=coef(fit)[2], 'var'=vcov(fit)[2, 2]))
})

Gives the following output:

coef.bmi      var
2.123417 4.542842
3.295818 3.801829
2.866338 3.034773
1.994418 4.124130
3.153911 3.531536

So the within variance is the average of the imputation-specific point estimate variances: 3.8 (the average of the second column). The between variance is 0.35 (the sample variance of the first column). Using the DF correction we get variance 4.23. This agrees with the pool command given in the mice package.

> fit <- with(data=nhimp, exp=lm(chl~bmi))
> summary(pool(fit))
                   est        se        t       df   Pr(>|t|)     lo 95      hi 95 nmis       fmi     lambda
(Intercept) 119.03466 54.716451 2.175482 19.12944 0.04233303  4.564233 233.505080   NA 0.1580941 0.07444487
bmi           2.68678  2.057294 1.305978 18.21792 0.20781073 -1.631731   7.005291    9 0.1853028 0.10051760

which shows SE = 2.057 for the bmi coefficient (Variance = SE**2 = 4.23). I fail to see how increasing the number of imputed datasets creates any particular issue. If you cannot supply an example of the error, I don't know how to be more helpful. But by-hand combination is certain to accommodate a variety of modeling strategies. This paper discusses other ways that the law of total variance can derive other estimates of the variance of the pooled estimate. In particular, the authors point out (correctly) that the necessary assumption for Rubin's Rules is not normality of the point estimates but something called congeniality. WRT normality, most point estimates that come from regression models have rapid convergence under the central limit theorem, and the bootstrap can show you this.
Applying Rubin's rule for combining multiply imputed datasets
You correctly wrote down the pooled estimator: $$ \bar{U} = \frac{1}{m} \sum_{i=1}^m U_i$$ Where $U_i$ represents the analytic results from the $i$-th imputed dataset. Normally, analytic results have
Applying Rubin's rule for combining multiply imputed datasets You correctly wrote down the pooled estimator: $$ \bar{U} = \frac{1}{m} \sum_{i=1}^m U_i$$ Where $U_i$ represents the analytic results from the $i$-th imputed dataset. Normally, analytic results have some normal approximating distribution from which we draw inference or create confidence bounds. This is mainly done using the mean value ($U_i$) and its standard error. T-tests, linear regressions, logistic regressions, and basically most analyses can be adequately summarized in terms of that value $U_i$ and its standard error $\text{se}(U_i)$. Rubin's Rules uses the law of total variance to write down the variance as the sum of a between and within imputation variance: $$\text{var}(\bar{U}) = E[\text{var}(\bar{U}|U_i)] + \mbox{var}\left(E[\bar{U}|U_i]\right)$$ The first term is the within-variance such that $E[\text{var}(\bar{U}|U_i) = \frac{1}{m}\sum_{i=1}^m V_i$ where $V_i$ is the variance of the analysis result from the $i$-th complete or imputed dataset. The latter term is the between-imputation variance: $ \mbox{var}\left(E[\bar{U}|U_i]\right) = \frac{M+1}{M-1} \sum_{i=1}^m\left(U_i - \bar{U}\right)^2$. I've never quite grasped the DF correction here, but this is basically the accepted approach. Anyway, since the recommended number of imputations is small (Rubin suggests as few as 5), it is typically possible to compute this number by hand fitting each analysis. A by-hand example is listed below: require(mice) set.seed(123) nhimp <- mice(nhanes) sapply(1:5, function(i) { fit <- lm(chl ~ bmi, data=complete(nhimp, i)) print(c('coef'=coef(fit)[2], 'var'=vcov(fit)[2, 2])) }) Gives the following output: coef.bmi var 2.123417 4.542842 3.295818 3.801829 2.866338 3.034773 1.994418 4.124130 3.153911 3.531536 So the within variance is the average of the imputation specific point estimate variances: 3.8 (average of second column). The between variance is 0.35 variance of the first column). Using the DF correction we get variance 4.23. This agrees with the pool command given in the mice package. > fit <- with(data=nhimp,exp=lm(chl~bmi)) > summary(pool(fit)) est se t df Pr(>|t|) lo 95 hi 95 nmis fmi lambda (Intercept) 119.03466 54.716451 2.175482 19.12944 0.04233303 4.564233 233.505080 NA 0.1580941 0.07444487 bmi 2.68678 2.057294 1.305978 18.21792 0.20781073 -1.631731 7.005291 9 0.1853028 0.10051760 which shows the SE = 2.057 for the model coefficient, (Variance = SE**2 = 4.23). I fail to see how increasing the number of imputed datasets creates any particular issue. If you cannot supply an example of the error, I don't know how to be more helpful. But by-hand combination is certain to accommodate a variety of modeling strategies. This paper discusses other ways that the law of total variance can derive other estimates of the variance of the pooled estimate. In particular, the authors point out (correctly) that the necessary assumption for Rubin's Rules is not normality of the point estimates but something called congeniality. WRT normality, most point estimates that come from regression models have rapid convergence under the central limit theorem, and the bootstrap can show you this.
Applying Rubin's rule for combining multiply imputed datasets You correctly wrote down the pooled estimator: $$ \bar{U} = \frac{1}{m} \sum_{i=1}^m U_i$$ Where $U_i$ represents the analytic results from the $i$-th imputed dataset. Normally, analytic results have
27,561
Maximum likelihood estimation of p in a Binomial sample
If you have a Bernoulli experiment and repeat it (independently) $N$ times, then you get a binomial variable. If you then repeat the binomial experiment $n$ times, you have in effect repeated the Bernoulli experiment $nN$ times. Let's look at an example: assume $Y\sim \mathrm{Bin}(N=5, p=3/4)$ and your observations after $n=5$ repetitions are $5, 4, 2, 3, 4$. Then it is clear that, for example, $(5+4+2+3+4)/5 = 3.6$ is not an estimate of $p$, but $(5+4+2+3+4)/(5 \times 5) = 0.72$ is.
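A quick numerical check of this point (the sample is simulated, so the particular draws are only for illustration):
set.seed(1)
N <- 5; n <- 5; p <- 3/4
y <- rbinom(n, size = N, prob = p)  # n binomial counts, each out of N Bernoulli trials
mean(y)            # estimates N*p (about 3.75), not p
sum(y) / (n * N)   # estimates p itself (about 0.75)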
Maximum likelihood estimation of p in a Binomial sample
If you have a bernoulli experiment and repeat that (independently) N times, then you get a binomial variable. Then if you repeat a binomial experiment $n$ times that means you have repeated $nN$ bern
Maximum likelihood estimation of p in a Binomial sample If you have a bernoulli experiment and repeat that (independently) N times, then you get a binomial variable. Then if you repeat a binomial experiment $n$ times that means you have repeated $nN$ bernoulli experiments. Lets give you an example: Assume $Y\sim Bin(p=3/4,N=5)$ and your observations after $n=5$ repetition are $ 5, 4, 2, 3, 4$ . Then it is clear that for example (5+ 4+ 2+ 3,+ 4)/5=3.6 is not an estimator of $p$ but $(5+4+2+3+4)/(5*5)=.72$ is.
Maximum likelihood estimation of p in a Binomial sample If you have a bernoulli experiment and repeat that (independently) N times, then you get a binomial variable. Then if you repeat a binomial experiment $n$ times that means you have repeated $nN$ bern
27,562
Maximum likelihood estimation of p in a Binomial sample
Assuming that you are talking about $n$ iid trials of $Y_i \sim \operatorname{Binom}(N,p)$, the likelihood function you calculated is certainly correct: $$L(p) = \prod_i^n f(y_i) = \prod_i^n \left[ {{N}\choose{y_i}}p^{y_i} (1-p)^{N - y_i} \right] = \left[ \prod_i^n {{N}\choose{y_i}} \right] p^{\sum_1^n y_i} (1-p)^{nN - \sum_1^n{y_i}} \,.$$ By definition then we have that the MLE for $p$ is: $$\hat{p} = \arg\max_p \left[ \left[ \prod_i^n {{N}\choose{y_i}} \right] p^{\sum_1^n y_i} (1-p)^{nN - \sum_1^n{y_i}}\right] $$ Since $x \mapsto \ln x$ is a strictly increasing function of $x$, we have that $x_1 < x_2 \iff \ln x_1 < \ln x_2$ for all $x_1, x_2$ in the domain of this function, which includes the values of the likelihood we calculated above. Taking this fact into consideration, we get that: $$\begin{array}{rcl} \hat{p} & = & \displaystyle\arg\max_p\ \ \ln\left[ \left[ \prod_i^n {{N}\choose{y_i}} \right] p^{\sum_1^n y_i} (1-p)^{nN - \sum_1^n{y_i}}\right] \\ & = & \displaystyle \arg\max_p \left[ \sum_{i=1}^n \ln {{N}\choose{y_i}} + \left( \sum_{i=1}^n y_i \right)\ln(p) + \left( nN - \sum_{i=1}^n y_i \right)\ln(1-p) \right] \end{array}$$ Since this is a differentiable function of $p$, if an MLE $\hat{p}$ exists, by the first derivative test for local extrema, we will have that $\frac{\partial}{\partial p}$ of the above expression is equal to $0$ when $p = \hat{p}$ (provided $p$ is in the interior of $[0,1]$, and not one of the endpoints $p=0$ or $p=1$, which is pretty obvious since in both of those cases $L(p) = 0$). We are even guaranteed that this value of $p$ will not only be an extremum, but a maximum, if we can show that the second derivative (w.r.t. $p$) of the above function is strictly less than zero, i.e. that the above function is concave. (I claim, but do not show, that this log-likelihood is concave. You can show this either by direct verification, e.g. page 4 here, or by showing that the binomial is an exponential family, so that in its natural/canonical parametrization its log-likelihood is concave (see here or here), therefore the MLE for the natural parameter is unique; then, if you can show that $p$ is a one-to-one increasing function $h$ of the natural parameter $\eta := \ln\big(p/(1-p)\big)$, you are done, i.e. just take $\hat{p} = h(\hat{\eta})$.) TL;DR The value of $p$ such that the derivative of the above expression w.r.t. $p$ evaluates to $0$ is our MLE $\hat{p}$, i.e. we choose $\hat{p}$ so that: $$0 + \frac{\left( \sum_{i=1}^n y_i \right)}{\hat{p}} - \frac{\left(n N - \sum_{i=1}^n y_i \right)}{1-\hat{p}} = 0 \,.$$ As one of the other answers states, after some algebra this gives $$\hat{p} = \frac{\sum_{i=1}^n y_i}{nN} = \frac{\bar{y}}{N} \,. $$
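As a sanity check on the closed form, one can maximize the log-likelihood numerically for a small hypothetical sample (the counts below are assumed values, chosen only for illustration):
y <- c(5, 4, 2, 3, 4)  # hypothetical successes out of N = 5 trials each
N <- 5; n <- length(y)
loglik <- function(p) sum(dbinom(y, size = N, prob = p, log = TRUE))
optimize(loglik, interval = c(1e-6, 1 - 1e-6), maximum = TRUE)$maximum  # numerical MLE
sum(y) / (n * N)                                                        # closed-form MLE, 0.72 here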
Maximum likelihood estimation of p in a Binomial sample
Assuming that you are talking about $n$ iid trials of $X_i \sim \operatorname{Binom}(N,p)$, the likelihood function you calculated is certainly correct: $$L(p) = \prod_i^n(f(y_i)) = \prod_i^n \left[
Maximum likelihood estimation of p in a Binomial sample Assuming that you are talking about $n$ iid trials of $X_i \sim \operatorname{Binom}(N,p)$, the likelihood function you calculated is certainly correct: $$L(p) = \prod_i^n(f(y_i)) = \prod_i^n \left[ {{N}\choose{y_i}}p^{y_i} (1-p)^{N- yi} \right] = \left[ \prod_i^n {{N}\choose{y_i}} \right] p^{\sum_1^n y_i} (1-p)^{nN - \sum_1^n{y_i}} \,.$$ By definition then we have that the MLE for $p$ is: $$\hat{p} = \arg\max_p \left[ \left[ \prod_i^n {{N}\choose{y_i}} \right] p^{\sum_1^n y_i} (1-p)^{nN - \sum_1^n{y_i}}\right] $$ Since $x \mapsto \ln x$ is a strictly increasing function of $x$, we have that $x_1 < x_2 \iff \ln x_1 < \ln x_2$ for all $x_1, x_2$ in the domain of this function, which includes the values of the likelihood we calculated above. Taking this fact into consideration, we get that: $$\begin{array}{rcl} \hat{p} & = & \displaystyle\arg\max_p\ \ \ln\left[ \left[ \prod_i^n {{N}\choose{y_i}} \right] p^{\sum_1^n y_i} (1-p)^{nN - \sum_1^n{y_i}}\right] \\ & = & \displaystyle \arg\limits\max_p \sum_{i=1}^n\left[ \ln {{N}\choose{y_i}} + \left( \sum_{i=1}^n y_i \right)\ln(p) + \left( nN - \sum_{i=1}^n y_i \right)\ln(1-p) \right] \end{array}$$ Since this is a differentiable function of $p$, if a MLE $\hat{p}$ exists, by the first derivative test for local extrema, we will have that $\frac{\partial}{\partial p}$ of the above expression is equal to $0$ when $p = \hat{p}$ (provided $p$ is in the interior of $[0,1]$, and not one of the endpoints $p=0$ or $p=1$, which is pretty obvious since in both of those cases $L(p) = 0$). We are even guaranteed that this value of $p$ will not only be an extremum, but even a maximum, if we can show that the second derivative ($w.r.t. p$) of the above function is strictly less than zero, i.e. that the above function is concave. (I claim, but do not show that this log-likelihood is concave. You can show this either by direct verification, e.g. page 4 here, or by showing that the binomial is an exponential family, so that in its natural/canonical parametrization its log-likelihood is concave (see here or here), therefore the MLE for the natural parameter is unique, then if you can show that $p$ is a one-to-one increasing function $h$ of the natural parameter $\eta:= \ln(p/1-p)$, you are done, i.e. just take $\hat{p} = h(\hat{\eta})$.) TL;DR The value of $p$ such that the derivative of the above expression w.r.t. $p$ evaluates to $0$ is our MLE $\hat{p}$, i.e. we choose $\hat{p}$ so that: $$0 + \frac{\left( \sum_{i=1}^n y_i \right)}{\hat{p}} - \frac{\left(n N - \sum_{i=1}^n y_i \right)}{1-\hat{p}} = 0 \,.$$ As one of the answers above claims, this means after doing some algebra that $$\hat{p} = \frac{\sum_{i=1}^n y_i}{nN} = \frac{\bar{y}}{N} \,. $$
Maximum likelihood estimation of p in a Binomial sample Assuming that you are talking about $n$ iid trials of $X_i \sim \operatorname{Binom}(N,p)$, the likelihood function you calculated is certainly correct: $$L(p) = \prod_i^n(f(y_i)) = \prod_i^n \left[
27,563
Maximum likelihood estimation of p in a Binomial sample
To get the MLE, you repeat the binomial experiment with $N$ trials $n$ times. So the first $N$ trials give you $y_1$ successes, the second $N$ trials give you $y_2$ successes, ..., and the $n$-th set of $N$ trials gives you $y_n$ successes. Mathematically, the MLE is $\hat p=\frac{\sum\limits_{i=1}^n y_i}{nN}$ (which is nothing but $\frac{\text{total successes}}{\text{total trials}}$). By contrast, $\frac{\sum\limits_{i=1}^n y_i}{N}$ is neither mathematically nor logically correct as an estimate of $p$ (it relates to the expected number of successes rather than the success probability, and indeed it need not even lie in $[0,1]$).
Maximum likelihood estimation of p in a Binomial sample
to get MLE, you repeat Binomial Experiment with N trials n times. So that, first N trials give you $y_1$ success. second N trials give you $y_2$ success. . . . nth N trials give you $y_n$ success. Mat
Maximum likelihood estimation of p in a Binomial sample to get MLE, you repeat Binomial Experiment with N trials n times. So that, first N trials give you $y_1$ success. second N trials give you $y_2$ success. . . . nth N trials give you $y_n$ success. Mathematically, you get MLE $\hat p=\frac{\sum\limits_{i=1}^ny_i}{nN}$(that is nothing but $\frac{total~success}{total~trials}$) $\hat p=\frac{\sum\limits_{i=1}^ny_i}{N}$ is neither Mathematically correct nor logically(it gives you MLE for Expected success).
Maximum likelihood estimation of p in a Binomial sample to get MLE, you repeat Binomial Experiment with N trials n times. So that, first N trials give you $y_1$ success. second N trials give you $y_2$ success. . . . nth N trials give you $y_n$ success. Mat
27,564
Maximum likelihood estimation of p in a Binomial sample
In a Binomial experiment, we are interested in the number of successes: not a single sequence. When calculating the Likelihood function of a Binomial experiment, you can begin from 1) the Bernoulli distribution (i.e. a single trial) or 2) just use the Binomial distribution (number of successes) 1) Likelihood derived from Bernoulli trials The probability of the outcome $y$ of a single trial is \begin{align} P(y \mid p) = p^y(1-p)^{1-y} \end{align} and for a sequence of trials \begin{align} P(y_1,...,y_N \mid p) &= \prod_{i=1}^Np^{y_i}(1-p)^{1-y_i} \\ &=p^{\sum_{i=1}^Ny_i}(1-p)^{N-\sum_{i=1}^Ny_i}. \end{align} For clarity, let's define the number of successes as \begin{align} k = \sum_{i=1}^Ny_i \end{align} Giving us: \begin{align} P(y_1,...,y_N \mid p) &= p^{k}(1-p)^{N-k}. \end{align} However, we are not interested in this single sequence, but in all the sequences that produce the same number of successes. This is similar to the relationship between the Bernoulli trial and a Binomial distribution: the probability of the sequences that produce $k$ successes is given by multiplying the probability of a single sequence above by the binomial coefficient $\binom{N}{k}$. Thus the likelihood (the probability of our data given the parameter value) is: \begin{align} L(p) = P(Y \mid p) = \binom{N}{k}p^{k}(1-p)^{N-k}. \end{align} 2) Likelihood derived from the Binomial distribution The Binomial probability \begin{align} P(Y \mid p) = \binom{N}{k}p^{k}(1-p)^{N-k} \end{align} already is the probability of $k$ successes over $N$ trials, not of a single observation or a single sequence of observations. Thus the Likelihood is not a product of these -- this would be the likelihood of several (independent) binomial experiments repeated, which is what you were getting at in your question! Where does the confusion come from? A lot of sources simply drop the binomial coefficient of the Likelihood function \begin{align} L(p) \propto p^{k}(1-p)^{N-k}, \end{align} without actually stating that this is being done, or are simply not rigorous enough in their derivation: they use the likelihood of a single sequence instead. Given fixed observations, $\binom{N}{k}$ is a constant and thus doesn't affect the MLE calculation or MCMC sampling from the posterior, and this is why they can get away with it.
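A small sketch of the last point, namely that the binomial coefficient does not change where the likelihood peaks (the data here are made up purely for illustration):
k <- 7; N <- 10  # hypothetical: 7 successes in 10 trials
full   <- function(p) dbinom(k, size = N, prob = p, log = TRUE)  # includes choose(N, k)
kernel <- function(p) k * log(p) + (N - k) * log(1 - p)          # coefficient dropped
optimize(full,   c(1e-6, 1 - 1e-6), maximum = TRUE)$maximum      # about 0.7
optimize(kernel, c(1e-6, 1 - 1e-6), maximum = TRUE)$maximum      # same maximizer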
Maximum likelihood estimation of p in a Binomial sample
In a Binomial experiment, we are interested in the number of successes: not a single sequence. When calculating the Likelihood function of a Binomial experiment, you can begin from 1) Bernoulli distri
Maximum likelihood estimation of p in a Binomial sample In a Binomial experiment, we are interested in the number of successes: not a single sequence. When calculating the Likelihood function of a Binomial experiment, you can begin from 1) Bernoulli distribution (i.e. single trial) or 2) just use Binomial distribution (number of successes) 1) Likelihood derived from Bernoulli trial The probability of success of a single trial is \begin{align} P(y \mid p) = p^y(1-p)^{1-y} \end{align} and for a sequence of trials \begin{align} P(y_1,...,y_N \mid p) &= \prod_{i=1}^Np^{y_i}(1-p)^{1-y_i} \\ &=p^{\sum_{i=1}^Ny_i}(1-p)^{N-\sum_{i=1}^Ny_i}. \end{align} For clarity, let's define the number of successes as \begin{align} k = \sum_{i=1}^Ny_i \end{align} Giving us: \begin{align} P(y_1,...,y_N \mid p) &= p^{k}(1-p)^{N-k}. \end{align} However, we are not interested in this single sequence, but all the sequences that produce similar number of successes. This is similar to the relationship between the Bernoulli trial and a Binomial distribution: The probability of sequences that produce $k$ successes is given by multiplying the probability of a single sequence above with the binomial coefficient $\binom{N}{k}$. Thus the likelihood (probability of our data given parameter value): \begin{align} L(p) = P(Y \mid p) &= \binom{N}{k}p^{k}(1-p)^{N-k}. \end{align} 2) Likelihood derived from Binomial distribution The Binomial probability \begin{align} P(Y \mid p) &= \binom{N}{k}p^{k}(1-p)^{N-k}. \end{align} already is the probability of $k$ successes over $N$ trials, not a single observation or a single sequence of observations. Thus the Likelihood is not a product of these -- this would be the likelihood of several (independent) binomial experiments repeated, which is what you were getting at in your question! Where the confusion comes from? A lot of sources simply drop the binomial coefficient of the Likelihood function \begin{align} L(p) \propto p^{k}(1-p)^{N-k}, \end{align} without actually stating that this being done, or are simply not rigorous enough in their derivation: Using the likelihood of a single sequence instead. Given fixed observations, $\binom{N}{k}$ is a constant and thus doesn't affect calculating MLE estimate or MCMC sampling from the posterior, and this is why they can get away with the mistake.
Maximum likelihood estimation of p in a Binomial sample In a Binomial experiment, we are interested in the number of successes: not a single sequence. When calculating the Likelihood function of a Binomial experiment, you can begin from 1) Bernoulli distri
27,565
How fair is it to use the word "predict" for (logistic) regression?
There is no problem with using the word "predict". It is important to recognize that predictions are unrelated to causality. Consider a case where most people who die in a hospital emergency room die of a heart attack. If you hear that a patient died, but don't know the cause, you could predict that it was probably from a heart attack, because you know that heart attacks are responsible for more than 50% of such deaths. You are making a prediction, but you are predicting an unknown cause from a known effect. Also, the prediction in this example is categorical, so it is analogous to logistic regression. (The analogy is probably stronger to multinomial logistic regression, but that doesn't matter here.) For what it's worth, predictions don't have to be related to any direct causal connection at all. You can make a prediction based on a spurious correlation, so long as the relationship is reliable. Consider predicting the unknown height of an identical twin based on the height of the twin's sibling. In this case, both heights are effects of a set of common causes (shared genetics and environment). The height of neither twin is a cause or an effect of the other. Nonetheless, you can make very good predictions in this situation.
How fair is it to use the word "predict" for (logistic) regression?
There is no problem with using the word "predict". It is important to recognize that predictions are unrelated to causality. Consider a case where most people who die in a hospital emergency room d
How fair is it to use the word "predict" for (logistic) regression? There is no problem with using the word "predict". It is important to recognize that predictions are unrelated to causality. Consider a case where most people who die in a hospital emergency room die of a heart attack. If you hear that a patient died, but didn't know the cause, you could predict that it was probably from a heart attack, because you know that heart attacks are responsible for >50%. You are making a prediction, but you are predicting an unknown cause from a known effect. Also, the prediction in this example is categorical, so it is analogous to logistic regression. (The analogy is probably stronger to multinomial logistic regression, but that doesn't matter here.) For what it's worth, predictions don't have to be related to any direct causal connection at all. You can make a prediction based on a spurious correlation, so long as the relationship is reliable. Consider predicting the unknown height of an identical twin based on the twin's sibling. In this case, both heights are effects of a set of common causes (shared genetics and environment). The height of neither twin is a cause or an effect of the other. Nonetheless, you can make very good predictions in this situation.
How fair is it to use the word "predict" for (logistic) regression? There is no problem with using the word "predict". It is important to recognize that predictions are unrelated to causality. Consider a case where most people who die in a hospital emergency room d
27,566
What criteria to use for separating variables into explanatory variables and responses for ordination methods in ecology?
As @amoeba mentioned in the comments, PCA will only look at one set of data and it will show you the major (linear) patterns of variation in those variables, the correlations or covariances between those variables, and the relationships between samples (the rows) in your data set. What one normally does with a species data set and a suite of potential explanatory variables is to fit a constrained ordination. In PCA, the principal components, the axes on the PCA biplot, are derived as optimal linear combinations of all variables. If you ran this on a data set of soil chemistry with variables pH, $\mathrm{Ca^{2+}}$, TotalCarbon, you might find that the first component was $$0.5 \times \mathrm{pH} + 1.4 \times \mathrm{Ca^{2+}} + 0.1 \times \mathrm{TotalCarbon} $$ and the second component $$2.7 \times \mathrm{pH} + 0.3 \times \mathrm{Ca^{2+}} - 5.6 \times \mathrm{TotalCarbon} $$ These components are freely selectable from the variables measured, and the ones chosen are those that sequentially explain the largest amount of variation in the dataset, with each linear combination orthogonal to (uncorrelated with) the others. In a constrained ordination, we have two datasets, but we are not free to select whatever linear combinations of the first data set (the soil chem data above) we want. Instead we have to select linear combinations of the variables in the second data set that best explain variation in the first. Also, in the case of PCA, the one data set is the response matrix and there are no predictors (you could think of the response as predicting itself). In the constrained case, we have a response data set which we wish to explain with a set of explanatory variables. Although you haven't explained which variables are the response, normally one wishes to explain variation in the abundances or composition of those species (i.e. the responses) using the environmental explanatory variables. The constrained version of PCA is a thing called Redundancy Analysis (RDA) in ecological circles. This assumes an underlying linear response model for the species, which is either not appropriate or only appropriate if you have short gradients along which the species respond. An alternative to PCA is a thing called correspondence analysis (CA). This is unconstrained but it does have an underlying unimodal response model, which is somewhat more realistic in terms of how species respond along longer gradients. Note also that CA models relative abundances or composition, whereas PCA models the raw abundances. There is a constrained version of CA, known as constrained or canonical correspondence analysis (CCA) - not to be confused with a more formal statistical model known as canonical correlation analysis. In both RDA and CCA the aim is to model the variation in species abundances or composition as a series of linear combinations of the explanatory variables. From the description it sounds like your wife wants to explain variation in the millipede species composition (or abundance) in terms of the other variables measured. Some words of warning: RDA and CCA are just multivariate regressions; CCA is just a weighted multivariate regression.
Anything you've learned about regression applies, and there are a couple of other gotchas too: as you increase the number of explanatory variables, the constraints actually become less and less and you aren't really extracting components/axes that explain the species composition optimally, and with CCA, as you increase the number of explanatory factors, you risk inducing an artefactual curve into the configuration of points in the CCA plot. The theory underlying RDA and CCA is less well developed than that of more formal statistical methods. We can only reasonably choose which explanatory variables to keep using step-wise selection (which is not ideal for all the reasons we don't like it as a selection method in regression) and we have to use permutation tests to do so (a sketch of this appears at the end of this answer). So my advice is the same as with regression: think ahead of time about what your hypotheses are and include variables that reflect those hypotheses. Don't just throw all explanatory variables into the mix. Example Unconstrained ordination PCA I'll show an example comparing PCA, CA and CCA using the vegan package for R which I help maintain and which is designed to fit these kinds of ordination methods: library("vegan") # load the package data(varespec) # load example data ## PCA pcfit <- rda(varespec) ## could add `scale = TRUE` if variables in different units pcfit > pcfit Call: rda(X = varespec) Inertia Rank Total 1826 Unconstrained 1826 23 Inertia is variance Eigenvalues for unconstrained axes: PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8 983.0 464.3 132.3 73.9 48.4 37.0 25.7 19.7 (Showed only 8 of all 23 unconstrained eigenvalues) vegan doesn't standardise the Inertia, unlike Canoco, so the total variance is 1826 and the Eigenvalues are in those same units and sum to 1826 > cumsum(eigenvals(pcfit)) PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8 982.9788 1447.2829 1579.5334 1653.4670 1701.8853 1738.8947 1764.6209 1784.3265 PC9 PC10 PC11 PC12 PC13 PC14 PC15 PC16 1796.6007 1807.0361 1816.3869 1819.1853 1821.5128 1822.9045 1824.1103 1824.9250 PC17 PC18 PC19 PC20 PC21 PC22 PC23 1825.2563 1825.4429 1825.5495 1825.6131 1825.6383 1825.6548 1825.6594 We also see that the first Eigenvalue is about half the variance, and with the first two axes we have explained ~80% of the total variance > head(cumsum(eigenvals(pcfit)) / pcfit$tot.chi) PC1 PC2 PC3 PC4 PC5 PC6 0.5384240 0.7927453 0.8651851 0.9056821 0.9322031 0.9524749 A biplot can be drawn from the scores of the samples and species on the first two principal components > plot(pcfit) There are two issues here The ordination is essentially dominated by three species — these species lie farthest from the origin — as these are the most abundant taxa in the data set There is a strong arch or curve in the ordination, suggestive of a long or dominant single gradient that has been broken down into the two main principal components to maintain the metric properties of the ordination. CA A CA might assist with both these points as it handles long gradients better due to the unimodal response model, and it models relative composition of species, not raw abundances.
The vegan / R code to do this is similar to the PCA code used above cafit <- cca(varespec) cafit > cafit <- cca(varespec) > cafit Call: cca(X = varespec) Inertia Rank Total 2.083 Unconstrained 2.083 23 Inertia is mean squared contingency coefficient Eigenvalues for unconstrained axes: CA1 CA2 CA3 CA4 CA5 CA6 CA7 CA8 0.5249 0.3568 0.2344 0.1955 0.1776 0.1216 0.1155 0.0889 (Showed only 8 of all 23 unconstrained eigenvalues) Here the first two axes explain about 40% of the variation among sites in their relative composition > head(cumsum(eigenvals(cafit)) / cafit$tot.chi) CA1 CA2 CA3 CA4 CA5 CA6 0.2519837 0.4232578 0.5357951 0.6296236 0.7148866 0.7732393 The joint plot of the species and site scores is now less dominated by a few species > plot(cafit) Which of PCA or CA you choose should be determined by the questions you wish to ask of the data. Usually with species data we are more often interested in differences in the suite of species, so CA is a popular choice. If we have a data set of environmental variables, say water or soil chemistry, we wouldn't expect those to respond in a unimodal manner along gradients, so CA would be inappropriate and PCA (of a correlation matrix, using scale = TRUE in the rda() call) would be more appropriate. Constrained ordination; CCA Now if we have a second set of data which we wish to use to explain patterns in the first (species) data set, we must use a constrained ordination. Often the choice here is CCA, but RDA is an alternative, as is RDA after transformation of the data to allow it to handle species data better. data(varechem) # load explanatory example data We re-use the cca() function but we either supply two data frames (X for species, and Y for explanatory/predictor variables) or a model formula listing the form of the model we wish to fit. To include all variables we could use varespec ~ ., data = varechem as the formula — but as I said above, this isn't a good idea in general ccafit <- cca(varespec ~ ., data = varechem) > ccafit Call: cca(formula = varespec ~ N + P + K + Ca + Mg + S + Al + Fe + Mn + Zn + Mo + Baresoil + Humdepth + pH, data = varechem) Inertia Proportion Rank Total 2.0832 1.0000 Constrained 1.4415 0.6920 14 Unconstrained 0.6417 0.3080 9 Inertia is mean squared contingency coefficient Eigenvalues for constrained axes: CCA1 CCA2 CCA3 CCA4 CCA5 CCA6 CCA7 CCA8 CCA9 CCA10 CCA11 0.4389 0.2918 0.1628 0.1421 0.1180 0.0890 0.0703 0.0584 0.0311 0.0133 0.0084 CCA12 CCA13 CCA14 0.0065 0.0062 0.0047 Eigenvalues for unconstrained axes: CA1 CA2 CA3 CA4 CA5 CA6 CA7 CA8 CA9 0.19776 0.14193 0.10117 0.07079 0.05330 0.03330 0.01887 0.01510 0.00949 The triplot of the above ordination is produced using the plot() method > plot(ccafit) Of course, now the task is to work out which of those variables is actually important. Also note that we have explained about 2/3 of the species variance using just 14 variables. One of the problems of using all variables in this ordination is that we've created an arched configuration in the sample and species scores, which is purely an artefact of using too many correlated variables. If you want to know more about this, check out the vegan documentation or a good book on multivariate ecological data analysis. Relationship with regression It is simplest to illustrate the link with RDA, but CCA is just the same except everything involves row and column two-way-table marginal sums as weights.
At its heart, RDA is equivalent to the application of PCA to a matrix of fitted values from a multiple linear regression fitted to each species' (response) values (abundances, say), with predictors given by the matrix of explanatory variables. In R we can do this as ## centre the responses spp <- scale(data.matrix(varespec), center = TRUE, scale = FALSE) ## ...and the predictors env <- as.data.frame(scale(varechem, center = TRUE, scale = FALSE)) ## fit a linear model to each column (species) in spp. ## Suppress intercept as we've centred everything fit <- lm(spp ~ . - 1, data = env) ## Collect fitted values for each species and do a PCA of that ## matrix pclmfit <- prcomp(fitted(fit)) ## fit the RDA directly for comparison rdafit <- rda(varespec ~ ., data = varechem) The Eigenvalues for these two approaches are equal: > (eig1 <- unclass(unname(eigenvals(pclmfit)[1:14]))) [1] 820.1042107 399.2847431 102.5616781 47.6316940 26.8382218 24.0480875 [7] 19.0643756 10.1669954 4.4287860 2.2720357 1.5353257 0.9255277 [13] 0.7155102 0.3118612 > (eig2 <- unclass(unname(eigenvals(rdafit, constrained = TRUE)))) [1] 820.1042107 399.2847431 102.5616781 47.6316940 26.8382218 24.0480875 [7] 19.0643756 10.1669954 4.4287860 2.2720357 1.5353257 0.9255277 [13] 0.7155102 0.3118612 > all.equal(eig1, eig2) [1] TRUE For some reason I can't get the axis scores (loadings) to match, but invariably these are scaled (or not) so I need to look into exactly how those are being done here. Inside rda() we don't actually do the RDA via lm() as I showed above; instead a QR decomposition is used for the linear model part and then an SVD for the PCA part. But the essential steps are the same.
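Regarding the step-wise selection with permutation tests mentioned earlier, here is a minimal sketch of how this is often done with vegan's ordistep() (treat it as an illustration under the assumption that ordistep() and these arguments are available in your vegan version, not as an endorsement of automatic variable selection):
mod0 <- cca(varespec ~ 1, data = varechem)  # intercept-only (unconstrained) model
mod1 <- cca(varespec ~ ., data = varechem)  # full model defining the scope
sel <- ordistep(mod0, scope = formula(mod1), direction = "forward",
                permutations = 199, trace = FALSE)
sel                                           # the selected model
anova(sel, by = "terms", permutations = 199)  # permutation tests for the retained terms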
What criteria to use for separating variables into explanatory variables and responses for ordinatio
As @amoeba mentioned in the comments, PCA will only look at one set of data and it will show you the major (linear) patterns of variation in those variables, the correlations or covariances between th
What criteria to use for separating variables into explanatory variables and responses for ordination methods in ecology? As @amoeba mentioned in the comments, PCA will only look at one set of data and it will show you the major (linear) patterns of variation in those variables, the correlations or covariances between those variables, and the relationships between samples (the rows) in your data set. What one normally does with a species data set and a suite of potential explanatory variables is to fit a constrained ordination. In PCA, the principal components, the axes on the PCA biplot, are derived as optimal linear combinations of all variables. If you ran this on a data set of soil chemistry with variables pH, $\mathrm{Ca^{2+}}$, TotalCarbon, you might find that the first component was $$0.5 \times \mathrm{pH} + 1.4 \times \mathrm{Ca^{2+}} + 0.1 \times \mathrm{TotalCarbon} $$ and the second component $$2.7 \times \mathrm{pH} + 0.3 \times \mathrm{Ca^{2+}} - 5.6 \times \mathrm{TotalCarbon} $$ These components are freely selectable from the variables measured, and which get chosen are those that explain sequentially the largest amount of variation in the dataset, and that each linear combination is orthogonal (uncorrelated with) the others. In a constrained ordination, we have two datasets, but we are not free to select whatever linear combinations of the first data set (the soil chem data above) we want. Instead we have to select linear combinations of the variables in the second data set that best explain variation in the first. Also, in the case of PCA, the one data set is the response matrix and there are no predictors (you could think of the response as predicting itself). In the constrained case, we have a response data set which we wish to explain with a set of explanatory variables. Although you haven't explained which variables are the response, normally one wishes to explain variation in the abundances or composition of those species (i.e. the responses) using the environmental explanatory variables. The constrained version of PCA is a thing called Redundancy Analysis (RDA) in ecological circles. This assumes an underlying linear response model for the species, which is either not appropriate or only appropriate if you have short gradients along which the species respond. An alternative to PCA is a thing called correspondence analysis (CA). This is unconstrained but it does have an underlying unimodal response model, which is somewhat more realistic in terms of how species respond along longer gradients. Note also that CA models relative abundances or composition, PCA models the raw abundances. There is a constrained version of CA, known as constrained or canonical correspondence analysis (CCA) - not to be confused with a more formal statistical model known as canonical correlation analysis. In both RDA and CCA the aim is to model the variation in species abundances or composition as a series of linear combinations of the explanatory variables. From the description it sounds like your wife wants to explain variation in the millipede species composition (or abundance) in terms of the other variables measured. Some words of warning; RDA and CCA are just multivariate regressions; CCA is just a weighted multivariate regression. 
Anything you've learned about regression applies, and there are a couple of other gotchas too: as you increase the number of explanatory variable, the constraints actually become less and less and you aren't really extracting components/axes that explain the species composition optimally, and with CCA, as you increase the number of explanatory factors, you risk inducing an artefact of a curve into the configuration of points in the CCA plot. the theory underlying RDA and CCA are less well-developed than more formal statistical methods. We can only reasonably choose which explanatory variables to keep using step-wise selection (which is not ideal for all the reasons we don't like it as a selection method in regression) and we have to use permutation tests to do so. so my advice is the same as with regression; think ahead of time what your hypotheses are and include variables that reflect those hypotheses. Don't just throw all explanatory variables into the mix. Example Unconstrained ordination PCA I'll show an example comparing PCA, CA and CCA using the vegan package for R which I help maintain and which is designed to fit these kinds of ordination methods: library("vegan") # load the package data(varespec) # load example data ## PCA pcfit <- rda(varespec) ## could add `scale = TRUE` if variables in different units pcfit > pcfit Call: rda(X = varespec) Inertia Rank Total 1826 Unconstrained 1826 23 Inertia is variance Eigenvalues for unconstrained axes: PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8 983.0 464.3 132.3 73.9 48.4 37.0 25.7 19.7 (Showed only 8 of all 23 unconstrained eigenvalues) vegan doesn't standardise the Inertia, unlike Canoco, so the total variance is 1826 and the Eigenvalues are in those same units and sum to 1826 > cumsum(eigenvals(pcfit)) PC1 PC2 PC3 PC4 PC5 PC6 PC7 PC8 982.9788 1447.2829 1579.5334 1653.4670 1701.8853 1738.8947 1764.6209 1784.3265 PC9 PC10 PC11 PC12 PC13 PC14 PC15 PC16 1796.6007 1807.0361 1816.3869 1819.1853 1821.5128 1822.9045 1824.1103 1824.9250 PC17 PC18 PC19 PC20 PC21 PC22 PC23 1825.2563 1825.4429 1825.5495 1825.6131 1825.6383 1825.6548 1825.6594 We also see that the first Eigenvalue is about half the variance and with the first two axes we have explained ~80% of the total variance > head(cumsum(eigenvals(pcfit)) / pcfit$tot.chi) PC1 PC2 PC3 PC4 PC5 PC6 0.5384240 0.7927453 0.8651851 0.9056821 0.9322031 0.9524749 A biplot can be drawn from the scores of the samples and species on the first two principal components > plot(pcfit) There are two issues here The ordination is essentially dominated by three species — these species lie farthest from the origin — as these are the most abundant taxa in the data set There is a strong arch of curve in the ordination, suggestive of a long or dominant single gradient that has been broken down into the two main principal components to maintain the metric properties of the ordination. CA A CA might assist with both these points as it handles long gradient better due to the unimodal response model, and it models relative composition of species not raw abundances. 
The vegan / R code to do this is similar to the PCA code used above cafit <- cca(varespec) cafit > cafit <- cca(varespec) > cafit Call: cca(X = varespec) Inertia Rank Total 2.083 Unconstrained 2.083 23 Inertia is mean squared contingency coefficient Eigenvalues for unconstrained axes: CA1 CA2 CA3 CA4 CA5 CA6 CA7 CA8 0.5249 0.3568 0.2344 0.1955 0.1776 0.1216 0.1155 0.0889 (Showed only 8 of all 23 unconstrained eigenvalues) Here we explain about 40% of the variation among sites in their relative composition > head(cumsum(eigenvals(cafit)) / cafit$tot.chi) CA1 CA2 CA3 CA4 CA5 CA6 0.2519837 0.4232578 0.5357951 0.6296236 0.7148866 0.7732393 The joint plot of the species and site scores is now less dominated by a few species > plot(cafit) Which of PCA or CA you choose should be determined by the questions you wish to ask of the data. Usually with species data we are more often interested in difference in the suite of species so CA is a popular choice. If we have a data set of environmental variables, say water or soil chemistry, we wouldn't expect those to respond in a unimodal manner along gradients so CA would be inappropriate and PCA (of a correlation matrix, using scale = TRUE in the rda() call) would be more appropriate. Constrained ordination; CCA Now if we have second set of data which we wish to use to explain patterns in the first species data set, we must use a constrained ordination. Often the choice here is CCA, but RDA is an alternative, as is RDA after transformation of the data to allow it to handle species data better. data(varechem) # load explanatory example data We re-use the cca() function but we either supply two data frames (X for species, and Y for explanatory/predictor variables) or a model formula listing the form of the model we wish to fit. To include all variables we could use varechem ~ ., data = varechem as the formula to include all variables — but as I said above, this isn't a good idea in general ccafit <- cca(varespec ~ ., data = varechem) > ccafit Call: cca(formula = varespec ~ N + P + K + Ca + Mg + S + Al + Fe + Mn + Zn + Mo + Baresoil + Humdepth + pH, data = varechem) Inertia Proportion Rank Total 2.0832 1.0000 Constrained 1.4415 0.6920 14 Unconstrained 0.6417 0.3080 9 Inertia is mean squared contingency coefficient Eigenvalues for constrained axes: CCA1 CCA2 CCA3 CCA4 CCA5 CCA6 CCA7 CCA8 CCA9 CCA10 CCA11 0.4389 0.2918 0.1628 0.1421 0.1180 0.0890 0.0703 0.0584 0.0311 0.0133 0.0084 CCA12 CCA13 CCA14 0.0065 0.0062 0.0047 Eigenvalues for unconstrained axes: CA1 CA2 CA3 CA4 CA5 CA6 CA7 CA8 CA9 0.19776 0.14193 0.10117 0.07079 0.05330 0.03330 0.01887 0.01510 0.00949 The triplot of the above ordination is produced using the plot() method > plot(ccafit) Of course, now the task is to work out which of those variables is actually important. Also note that we have explained about 2/3 of the species variance using just 13 variables. one of the problems of using all variables in this ordination is that we've created an arched configuration in sample and species scores, which is purely an artefact of using too-many correlated variables. If you want to know more about this, check out the vegan documentation or a good book on multivariate ecological data analysis. Relationship with regression It is simplest to illustrate the link with RDA, but CCA is just the same except everything involves row and column two-way-table marginal sums as weights. 
At it's heart, RDA is equivalent to the application of PCA to a matrix of fitted values from a multiple linear regression fitted to each species (response) values (abundances, say) with predictors given by the matrix of explanatory variables. In R we can do this as ## centre the responses spp <- scale(data.matrix(varespec), center = TRUE, scale = FALSE) ## ...and the predictors env <- as.data.frame(scale(varechem, center = TRUE, scale = FALSE)) ## fit a linear model to each column (species) in spp. ## Suppress intercept as we've centred everything fit <- lm(spp ~ . - 1, data = env) ## Collect fitted values for each species and do a PCA of that ## matrix pclmfit <- prcomp(fitted(fit)) The Eigenvalues for these two approaches are equal: > (eig1 <- unclass(unname(eigenvals(pclmfit)[1:14]))) [1] 820.1042107 399.2847431 102.5616781 47.6316940 26.8382218 24.0480875 [7] 19.0643756 10.1669954 4.4287860 2.2720357 1.5353257 0.9255277 [13] 0.7155102 0.3118612 > (eig2 <- unclass(unname(eigenvals(rdafit, constrained = TRUE)))) [1] 820.1042107 399.2847431 102.5616781 47.6316940 26.8382218 24.0480875 [7] 19.0643756 10.1669954 4.4287860 2.2720357 1.5353257 0.9255277 [13] 0.7155102 0.3118612 > all.equal(eig1, eig2) [1] TRUE For some reason I can't get the axis scores (loadings) to match, but invariably these are scaled (or not) so I need to look into exactly how those are being done here. We don't do the RDA via rda() as I showed with lm() etc, but we use a QR decomposition for the linear model part and then SVD for the PCA part. But the essential steps are the same.
What criteria to use for separating variables into explanatory variables and responses for ordinatio As @amoeba mentioned in the comments, PCA will only look at one set of data and it will show you the major (linear) patterns of variation in those variables, the correlations or covariances between th
27,567
How to report data for an entire population? [duplicate]
The concept of significance or hypothesis testing is not relevant for a whole population. Hypothesis testing is based on the assumption that you deal with a sample from a (usually) infinite population, and asks the question: what is the probability that we have drawn the sample by chance from a population that fulfills the assumptions of the null hypothesis? If this probability is low, then we reject the null. Imagine the following scenario. You measure two groups of people (for example, ten people from New York and ten people from, say, Cracow) and find that the average height in the two groups is 1.80 and 1.79 meters, respectively, and the standard deviations are 15cm. If this is a sample from an infinite population, you will not reject the null hypothesis -- the difference is small, and we conclude that the probability of getting these results if there is no difference in reality (that is, in our infinite population) is relatively high. However, if these two groups make up the full population, then there is no significance. If you have measured every person who lives in Cracow and every person who lives in New York and you find a difference in averages of 1cm, then the populations are different in their mean, full stop. We have no probabilities any more, just measurements! (except possibly for measurement error). What you can do instead is to show the effect size. In the hypothetical example, one would show the difference between the groups for example using Cohen's d; that is, express the difference in standard deviations. In the example above, the difference would be 1cm/15cm ≈ 0.067. How to calculate your effect size will depend on what your data actually are. The point is, I think, to ask not what is statistically significant, but what effect size is significant for you as a scientist.
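For completeness, the effect-size arithmetic in the height example can be written out directly (using the made-up summary numbers from above):
mean_ny   <- 1.80  # mean height in New York, metres
mean_krk  <- 1.79  # mean height in Cracow, metres
sd_common <- 0.15  # common standard deviation, metres
(mean_ny - mean_krk) / sd_common  # Cohen's d, about 0.067: a tiny standardized difference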
How to report data for an entire population? [duplicate]
The concept of significance or hypothesis testing is not relevant for a whole population. Hypothesis testing is based on the assumption that you deal with a sample from a (usually) infinite population
How to report data for an entire population? [duplicate] The concept of significance or hypothesis testing is not relevant for a whole population. Hypothesis testing is based on the assumption that you deal with a sample from a (usually) infinite population, and asks the question: what is the probability that we have drawn the sample by chance from a population that fulfills the assumptions of the null hypothesis? If this probability is low, then we reject the null. Imagine the following scenario. You measure two groups of people (for example, ten people from New York and 10 people from, say, Cracow) and find that the average height in the two groups is 1.80 and 1.79 meters, respectively, and the standard deviations are 15cm. If this is a sample from an infinite population, you will not reject the null hypothesis -- the difference is small, and we conclude that the probability of getting these results if there is no difference in reality (that is, in our infinite population) is relatively high. However, if these two groups make the full population, then there is no significance. If you have measured every person who lives in Cracow and every person who lives in New York and you find a difference in averages of 1cm, then the populations are different in their mean, full stop. We have no probabilities any more, just measurements! (-- except possibly for a measurement error). What you can do instead is to show the effect size. In the hypothetical example, one would show the difference between the groups for example using Cohen's d; that is, express the difference in standard deviations. In the example above, the difference would be 1cm/15cm = 0.0(6). How to calculate your effect size will depend on what actually is your data. The point is, I think, to ask not what is statistically significant, but what effect size is significant for you as a scientist.
How to report data for an entire population? [duplicate] The concept of significance or hypothesis testing is not relevant for a whole population. Hypothesis testing is based on the assumption that you deal with a sample from a (usually) infinite population
27,568
How to report data for an entire population? [duplicate]
January's response is correct as far as it goes. However, if the population you are considering is relatively small, say 100 to 1000, and you have collected your data for a particular period, then you may find it more appropriate to treat it as a sample and apply statistical procedures if you want to infer that your conclusions also apply to similar groups at a future date. Even for cities, over a year there might be a considerable influx of immigrants or efflux of emigrants, or there may be an epidemic disease or other event that could affect your conclusions if they were used for predictive purposes.
How to report data for an entire population? [duplicate]
January's response is correct as far as it goes. However, if say the population you are considering is relatively small, say 100 to 1000 and you have collected your data for a particular period, then
How to report data for an entire population? [duplicate] January's response is correct as far as it goes. However, if say the population you are considering is relatively small, say 100 to 1000 and you have collected your data for a particular period, then if you want to infer that your conclusions also apply to similar groups at a future date, then you may find it more appropriate to treat it as a sample and apply statistical processes. Even for cities, over a year, there might be a considerable influx or eflux of immigrants or emigrants, or there may be an epidemic disease or other event that could affect your conclusions, if they were used for predictive purposes.
How to report data for an entire population? [duplicate] January's response is correct as far as it goes. However, if say the population you are considering is relatively small, say 100 to 1000 and you have collected your data for a particular period, then
27,569
How to report data for an entire population? [duplicate]
You must always ask yourself, what quantities am I interested in? Statistics does not (directly) answer non-numerical questions. You must consider which aspect of which group of people you are interested in, and how they are related to values in the sample at hand. Descriptive statistics, such as the mean, the correlation coefficient, or Cohen's d, quantify various aspects of a sample. Inferential statistics, such as point hypothesis tests, provide estimates of these very same measures for a whole population based on a subset (in the face of sampling error). This allows one to guess what a descriptive statistic would have been, had the whole population been measured without error. They generalise from some data we have available, to some data we haven't - under the assumption that the data we have is representative of the data we haven't. An inferential statistic will not be more than an approximation of a descriptive statistic. As noted by @January, statistical significance does not mean practical significance; rather, statistical significance (in the context of point hypothesis tests) informs you that you may assign low confidence to a specific single value for the population parameter (often zero). If you precisely know the value of the population parameter, I cannot imagine any reason to estimate it. What it means to you to have low confidence in a given value for the population parameter (such as "rejecting the null" when p < 0.05, which typically means rejecting, with some confidence, the hypothesis that the population mean is 0) depends on the problem in question. The value may be non-zero, but so low as to have no practical relevance. That a test rejects a point null does not carry more direct information than knowing the population parameter to be a specific (non-zero) value; rather, it carries less information, since the hypothesis test sheds doubt on one value, whereas the descriptive statistic sheds doubt on all values but one. (Indirectly, a p value will also inform you about other descriptive statistics, such as variance and effect size). You might imagine inferential statistics as "confidence labels" assigned to descriptive statistics (although this metaphor is getting dangerously close to p(H|D)). However, it is not as simple as saying that if the whole population the researcher is interested in has been measured, descriptive statistics are unequivocally superior. Whether inferential statistics make sense depends not only on the fraction of the population sampled, but also on the reliability of the measurements. For example, height in @January's example is rather easy to measure correctly (ignoring for now that people grow, die, have accidents, ...). But what if you were interested in their memory span, income or beard length? In such cases, measurement error characterizes the data even though the whole population has been sampled, and you do not in fact precisely know the parameter. If you were to repeat the measurement, you would get completely different (though statistically very similar) results! In such cases, inference may still be useful. But basically: consider which parameter you are interested in. p values and statistical significance aren't so much parameters as "confidence labels" for such parameters.
How to report data for an entire population? [duplicate]
You must always ask yourself, what quantities am I interested in? Statistics does not (directly) answer non-numerical questions. You must consider - which aspect of which group of people am I interest
How to report data for an entire population? [duplicate] You must always ask yourself, what quantities am I interested in? Statistics does not (directly) answer non-numerical questions. You must consider - which aspect of which group of people am I interested in, and how are they related to values in the sample at hand? Descriptive statistics, such as the mean, the correlation coefficient, or Cohen's d, quantify various aspects of a sample. Inferential statistics, such as point hypothesis tests, provide estimates of these very same measures for a whole population based based on a subset (in the face of sampling error). This allows one to guess what a descriptive statistic would have been, had the whole population been measured without error. They generalise from some data we have available, to some data we haven't - under the assumption that the data we have is representative of the data we haven't. An inferential statistic will not be more than an approximation of a descriptive statistic. As noted by @january, Statistical significance does not mean practical significance; rather, statistical significance (in the context of point hypothesis tests) informs you that you may assign low confidence to a specific single value for the population parameter (often zero). If you precisely know the value of the population parameter, I cannot imagine any reason to estimate it. What it means to you that you have low confidence in a given value for the population parameter, such as "rejecting the null" when p<0.05, typically meaning that you reject with some confidence the hypothesis that the population mean is 0, depends on the problem in question. The value may be non-zero, but so low as to have no practical relevance. That a test rejects a point null does not carry more direct information than knowing the population parameter to be a specific (non-zero) value; rather, it carries less information, since the hypothesis test sheds doubt on one value, the descriptive statistic sheds doubt on all but one values. (Indirectly, a p value will also inform you about other descriptive statistics, such as variance and effect size). You might imagine inferential statistics as "confidence labels" assigned to descriptive statistics (although this metaphor is getting dangerously close to p(H|D)). However, it is not so easy as to say that if the whole population the researcher is interested in has been measured, descriptive statistics are unequivocally superior. If inferential statistics make sense or not depends not only on the fraction of the population sampled, but also on the reliability of the measurements. For example, height in @January's example is rather easy to measure correctly (ignoring for now that people grow, die, have accidents, ...). But, what if you were interested in their memory span, income or beard length? In such cases, sampling error characterizes the data even though the population has been sampled, and you do not in fact precisely know the parameter. If you were to repeat the measurement, you would get completely different (though statistically very similar) results! In such cases, inference may still be useful. But basically: consider which parameter you are interested in. p values and statistical significance aren't so much parameters, but "confidence labels" for such parameters.
How to report data for an entire population? [duplicate] You must always ask yourself, what quantities am I interested in? Statistics does not (directly) answer non-numerical questions. You must consider - which aspect of which group of people am I interest
27,570
Acceptance ratio in Metropolis–Hastings algorithm
In order to get this, and to simplify matters, I always think first of just one parameter with a uniform (long-range) a-priori distribution, so that in this case, the MAP estimate of the parameter is the same as the MLE. However, assume that your likelihood function is complicated enough to have several local maxima. What MCMC does in this example in 1-D is to explore the posterior curve until it finds values of maximum probability. If the variance is too small, you'll most surely get stuck on a local maximum, because you'll always be sampling values near it: the MCMC algorithm will "think" it is stuck on the target distribution. However, if the variance is too large, once you get stuck on one local maximum, you'll more-or-less reject values until you find other regions of maximum probability. If you happen to propose the value at the MAP (or a similar region of local maximum probability which is larger than the others), with a large variance you'll end up rejecting almost every other value: the difference between this region and the others will be too large. Of course, all of the above will affect the convergence rate and not the convergence "per-se" of your chains. Recall that whatever the variance, as long as the probability of selecting the value of this global maximum region is positive, your chain will converge. To bypass this problem, however, what one can do is to propose different variances in a burn-in period for each parameter and aim at a certain acceptance rate which can satisfy your needs (say $0.44$, see Gelman, Roberts & Gilks, 1995 and Gelman, Gilks & Roberts, 1997 to learn more on the issue of selecting a "good" acceptance rate which, of course, will depend on the form of your posterior distribution). Of course, in this case the chain is non-Markovian, so you must NOT use these samples for inference: you just use them to adjust the variance.
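To make the burn-in tuning idea concrete, here is a minimal, purely illustrative random-walk Metropolis sketch in R. The standard normal target, the 0.44 target rate and the crude multiplicative adaptation rule are assumptions made for this example, not part of the original answer; only the post-burn-in draws are kept, since adaptation stops before they are collected.
# Illustrative sketch only: random-walk Metropolis with the proposal sd tuned
# during burn-in toward a target acceptance rate, then held fixed afterwards.
log_target <- function(x) dnorm(x, log = TRUE)   # assumed toy target: N(0, 1)
rw_metropolis <- function(n, x0, sd0, burnin = 1000, target_acc = 0.44) {
  x <- x0; s <- sd0; acc <- 0
  out <- numeric(n)
  for (i in 1:(burnin + n)) {
    prop <- rnorm(1, x, s)
    if (log(runif(1)) < log_target(prop) - log_target(x)) { x <- prop; acc <- acc + 1 }
    if (i <= burnin && i %% 50 == 0) {           # adapt only during burn-in
      s <- s * ifelse(acc / i > target_acc, 1.1, 0.9)
    }
    if (i > burnin) out[i - burnin] <- x         # keep only post-burn-in draws
  }
  out
}
draws <- rw_metropolis(5000, x0 = 0, sd0 = 5)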
Acceptance ratio in Metropolis–Hastings algorithm
In order to get this, and to simplify the matters, I always think first in just one parameter with uniform (long-range) a-priori distribution, so that in this case, the MAP estimate of the parameter i
Acceptance ratio in Metropolis–Hastings algorithm In order to get this, and to simplify matters, I always think first of just one parameter with a uniform (long-range) a-priori distribution, so that in this case, the MAP estimate of the parameter is the same as the MLE. However, assume that your likelihood function is complicated enough to have several local maxima. What MCMC does in this example in 1-D is to explore the posterior curve until it finds values of maximum probability. If the variance is too small, you'll most surely get stuck on a local maximum, because you'll always be sampling values near it: the MCMC algorithm will "think" it is stuck on the target distribution. However, if the variance is too large, once you get stuck on one local maximum, you'll more-or-less reject values until you find other regions of maximum probability. If you happen to propose the value at the MAP (or a similar region of local maximum probability which is larger than the others), with a large variance you'll end up rejecting almost every other value: the difference between this region and the others will be too large. Of course, all of the above will affect the convergence rate and not the convergence "per-se" of your chains. Recall that whatever the variance, as long as the probability of selecting the value of this global maximum region is positive, your chain will converge. To bypass this problem, however, what one can do is to propose different variances in a burn-in period for each parameter and aim at a certain acceptance rate which can satisfy your needs (say $0.44$, see Gelman, Roberts & Gilks, 1995 and Gelman, Gilks & Roberts, 1997 to learn more on the issue of selecting a "good" acceptance rate which, of course, will depend on the form of your posterior distribution). Of course, in this case the chain is non-Markovian, so you must NOT use these samples for inference: you just use them to adjust the variance.
Acceptance ratio in Metropolis–Hastings algorithm In order to get this, and to simplify the matters, I always think first in just one parameter with uniform (long-range) a-priori distribution, so that in this case, the MAP estimate of the parameter i
27,571
Acceptance ratio in Metropolis–Hastings algorithm
There are two basic assumptions that lead to this relationship: The stationary distribution $\pi(\cdot)$ doesn't change too quickly (i.e. it has a bounded first derivative). Most of the probability mass of $\pi(\cdot)$ is concentrated in a relatively small subset of the domain (the distribution is "peaky"). Let's consider the "small $\sigma^2$" case first. Let $x_i$ be the current state of the Markov chain and $x_j \sim \mathcal{N}(x_i, \sigma^2)$ be the proposed state. Since $\sigma^2$ is very small, we can be confident that $x_j \approx x_i$. Combining this with our first assumption, we see that $\pi(x_j) \approx \pi(x_i)$ and thus $\frac{\pi(x_j)}{\pi(x_i)} \approx 1$. The low acceptance rate with large $\sigma^2$ follows from the second assumption. Recall that approximately $95\%$ of the probability mass of a normal distribution lies within $2\sigma$ of its mean, so in our case most proposals will be generated within the window $[x_i - 2\sigma, x_i + 2\sigma]$. As $\sigma^2$ gets larger, this window expands to cover more and more of the variable's domain. The second assumption implies that the density function must be quite small over most of the domain, so when our sampling window is large $\pi(x_j)$ will frequently be very small. Now for a bit of circular reasoning: since we know the M-H sampler generates samples distributed according to the stationary distribution $\pi$, it must be the case it generates many samples in the high density regions of the domain and few samples in the low density regions. Since most samples are generated in high density regions, $\pi(x_i)$ is usually large. Thus, $\pi(x_i)$ is large and $\pi(x_j)$ is small, resulting in an acceptance rate $\frac{\pi(x_j)}{\pi(x_i)} << 1$. These two assumptions are true of most distributions we're likely to be interested in, so this relationship between proposal width and acceptance rate is a useful tool for understanding the behavior of M-H samplers.
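A quick numerical check of the two regimes described above, assuming a standard normal target (an assumption made only for this illustration): with a tiny proposal sd the ratio is close to 1 and almost everything is accepted, while with a huge proposal sd most proposals land where the density is tiny and are rejected.
# Empirical acceptance rate of a random-walk Metropolis sampler on a N(0,1)
# target, for a small and a large proposal standard deviation (sketch only).
acc_rate <- function(sigma, n = 20000) {
  x <- 0; acc <- 0
  for (i in 1:n) {
    prop <- rnorm(1, x, sigma)
    if (runif(1) < exp(dnorm(prop, log = TRUE) - dnorm(x, log = TRUE))) {
      x <- prop; acc <- acc + 1
    }
  }
  acc / n
}
set.seed(1)
acc_rate(0.05)   # pi(x_j)/pi(x_i) is near 1, so the rate is close to 1
acc_rate(50)     # most proposals fall where pi is tiny, so the rate is very low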
Acceptance ratio in Metropolis–Hastings algorithm
There are two basic assumptions that lead to this relationship: The stationary distribution $\pi(\cdot)$ doesn't change too quickly (i.e. it has a bounded first derivative). Most of the probability m
Acceptance ratio in Metropolis–Hastings algorithm There are two basic assumptions that lead to this relationship: The stationary distribution $\pi(\cdot)$ doesn't change too quickly (i.e. it has a bounded first derivative). Most of the probability mass of $\pi(\cdot)$ is concentrated in a relatively small subset of the domain (the distribution is "peaky"). Let's consider the "small $\sigma^2$" case first. Let $x_i$ be the current state of the Markov chain and $x_j \sim \mathcal{N}(x_i, \sigma^2)$ be the proposed state. Since $\sigma^2$ is very small, we can be confident that $x_j \approx x_i$. Combining this with our first assumption, we see that $\pi(x_j) \approx \pi(x_i)$ and thus $\frac{\pi(x_j)}{\pi(x_i)} \approx 1$. The low acceptance rate with large $\sigma^2$ follows from the second assumption. Recall that approximately $95\%$ of the probability mass of a normal distribution lies within $2\sigma$ of its mean, so in our case most proposals will be generated within the window $[x_i - 2\sigma, x_i + 2\sigma]$. As $\sigma^2$ gets larger, this window expands to cover more and more of the variable's domain. The second assumption implies that the density function must be quite small over most of the domain, so when our sampling window is large $\pi(x_j)$ will frequently be very small. Now for a bit of circular reasoning: since we know the M-H sampler generates samples distributed according to the stationary distribution $\pi$, it must be the case it generates many samples in the high density regions of the domain and few samples in the low density regions. Since most samples are generated in high density regions, $\pi(x_i)$ is usually large. Thus, $\pi(x_i)$ is large and $\pi(x_j)$ is small, resulting in an acceptance rate $\frac{\pi(x_j)}{\pi(x_i)} << 1$. These two assumptions are true of most distributions we're likely to be interested in, so this relationship between proposal width and acceptance rate is a useful tool for understanding the behavior of M-H samplers.
Acceptance ratio in Metropolis–Hastings algorithm There are two basic assumptions that lead to this relationship: The stationary distribution $\pi(\cdot)$ doesn't change too quickly (i.e. it has a bounded first derivative). Most of the probability m
27,572
When to use Student's or Normal distribution in linear regression?
The normal distribution is the large sample distribution in many meaningful statistical problems that involve some version of the Central Limit Theorem: you have (approximately) independent pieces of information that are being added up to arrive at the answer. If parameter estimates are asymptotically normal, their functions will also be asymptotically normal (in regular cases). On the other hand, the Student $t$ distribution is derived under more restrictive conditions of i.i.d. normal regression errors. If you can buy this assumption, you can buy the $t$-distribution being used for testing hypotheses in linear regression. The use of this distribution provides wider confidence intervals than the use of the normal distribution. The substantive meaning of that is that in small samples, you need to estimate your measure of uncertainty, the regression mean squared error, or the standard deviation of residuals, $\sigma$. (In large samples, you kinda have as much information as if you knew it, so the $t$-distribution degenerates to the normal distribution.) There are some occasions in linear regression, even with finite samples, where the Student distribution cannot be justified. They are related to violations of the second order conditions on regression errors; namely, that they are (1) of constant variance, and (2) independent. If these assumptions are violated, and you correct your standard errors using the Eicker/White estimator for heteroskedastic, but independent, residuals; or the Newey-West estimator for serially correlated errors; or clustered standard errors for cluster-correlated data, there is no way you can pull a reasonable justification for the Student distribution. However, by employing an appropriate version of the asymptotic normality argument (triangular arrays and such), you can justify the normal approximation (although you should keep in mind that your confidence intervals would very likely be too narrow).
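As a small illustration of the practical difference, the sketch below (simulated data, i.i.d. normal errors assumed) compares a t-based and a normal-based 95% interval for a slope; with n = 10 the t interval is noticeably wider, and the gap vanishes as n grows.
# Sketch: 95% interval for a regression slope using t vs normal quantiles.
set.seed(1)
n <- 10                                   # small sample, so the two differ visibly
x <- rnorm(n); y <- 1 + 2 * x + rnorm(n)
fit <- lm(y ~ x)
b  <- coef(fit)["x"]
se <- summary(fit)$coefficients["x", "Std. Error"]
b + c(-1, 1) * qt(0.975, df = n - 2) * se   # t-based interval (wider)
b + c(-1, 1) * qnorm(0.975) * se            # normal-based interval (narrower)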
When to use Student's or Normal distribution in linear regression?
The normal distribution is the large sample distribution in many meaningful statistical problems that involve some version of the Central Limit Theorem: you have (approximately) independent pieces of
When to use Student's or Normal distribution in linear regression? The normal distribution is the large sample distribution in many meaningful statistical problems that involve some version of the Central Limit Theorem: you have (approximately) independent pieces of information that are being added up to arrive at the answer. If parameter estimates are asymptotically normal, their functions will also be asymptotically normal (in regular cases). On the other hand, the Student $t$ distribution is derived under more restrictive conditions of i.i.d. normal regression errors. If you can buy this assumption, you can buy the $t$-distribution being used for testing hypothesis in linear regression. The use of this distribution provides wider confidence intervals than the use of the normal distribution. The substantive meaning of that is that in small samples, you need to estimate your measure of uncertainty, the regression mean squared error, or the standard deviation of residuals, $\sigma$. (In large samples, you kinda have as much information as if you knew it, so the $t$-distribution degenerates to the normal distribution.) There are some occasions in linear regression, even with finite samples, where the Student distribution cannot be justified. They are related to violations of the second order conditions on regression errors; namely, that they are (1) constant variance, and (2) independent. If these assumptions are violated, and you correct your standard errors using Eicker/White estimator for heteroskedastic, but independent residuals; or Newey-West estimator for serially correlated errors, or clustered standard errors for cluster-correlated data, there is no way you can pull a reasonable justification for Student distribution. However, by employing an appropriate version of asymptotic normality argument (traingular arrays and such), you can justify the normal approximation (although you should have in mind that your confidence intervals would very likely be too narrow).
When to use Student's or Normal distribution in linear regression? The normal distribution is the large sample distribution in many meaningful statistical problems that involve some version of the Central Limit Theorem: you have (approximately) independent pieces of
27,573
When to use Student's or Normal distribution in linear regression?
I like the representation of the student t distribution as a mixture of a normal distribution and a gamma distribution: $$Student(x|\mu,\sigma^2,\nu)=\int_{0}^{\infty}Normal\left(x|\mu,\frac{\sigma^2}{\rho}\right)Gamma\left(\rho|\frac{\nu}{2},\frac{\nu}{2}\right)d\rho$$ Note that the mean of the gamma distribution is $E[\rho|\nu]=1$ and the variance of this distribution is $V[\rho|\nu]=\frac{2}{\nu}$. So we can view the t-distribution as generalising the constant variance assumption to a "similar" variance assumption. $\nu$ basically controls how similar we allow the variances to be. You can also view this as "random weighted" regression, for we can use the above integral as a "hidden variable" representation as follows: $$y_i=\mu_i+\frac{e_i}{\sqrt{\rho_i}}$$ where $e_i\sim N(0,\sigma^2)$ and $\rho_i\sim Gamma\left(\frac{\nu}{2},\frac{\nu}{2}\right)$, all variables independent. In fact this is basically just the definition of the t-distribution, as $Gamma\left(\frac{\nu}{2},\frac{\nu}{2}\right)\sim \frac{1}{\nu}\chi^2_\nu$. You can see why this result makes the student t distribution "robust" compared to the normal: a large error $y_i-\mu_i$ can occur due to a large value of $\sigma^2$ or due to a small value of $\rho_i$. Now because $\sigma^2$ is common to all observations, but $\rho_i$ is specific to the ith one, the general "common sense" thing to conclude is that outliers give evidence for small $\rho_i$. Additionally, if you were to do linear regression $\mu_i=x_i^T\beta$, you will find that $\rho_i$ is the weight for the ith observation, assuming that $\rho_i$ is known: $$\hat{\beta}=(\sum_i\rho_ix_ix_i^T)^{-1}(\sum_i\rho_ix_iy_i)$$ So an outlier constitutes evidence for small $\rho_i$, which means the ith observation gets less weight. Additionally, a small "outlier" - an observation which is predicted/fitted much better than the rest - constitutes evidence for large $\rho_i$. Hence this observation will be given more weight in the regression. This is in line with what one would intuitively do with an outlier or a good data point. Note that there is no "rule" for deciding these things, although my and others' responses to this question may be useful for finding some tests you can do along the finite variance path (the student t has infinite variance for degrees of freedom less than or equal to two).
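The mixture representation is easy to check by simulation; the sketch below (with an arbitrary choice of $\nu = 4$ for the illustration) draws from the normal/gamma mixture and compares against direct Student-t draws.
# Check the scale-mixture representation: draw rho ~ Gamma(nu/2, nu/2), then
# x ~ Normal(mu, sigma^2 / rho), and compare with direct Student-t draws.
set.seed(1)
nu <- 4; mu <- 0; sigma <- 1; n <- 1e5
rho   <- rgamma(n, shape = nu / 2, rate = nu / 2)
x_mix <- rnorm(n, mu, sigma / sqrt(rho))    # mixture draws
x_t   <- mu + sigma * rt(n, df = nu)        # direct t draws
qqplot(x_mix, x_t); abline(0, 1)            # points fall on the line: same distribution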
When to use Student's or Normal distribution in linear regression?
I like the representation of the student t distribution as a mixture of a normal distribution and a gamma distribution: $$Student(x|\mu,\sigma^2,\nu)=\int_{0}^{\infty}Normal\left(x|\mu,\frac{\sigma^2}
When to use Student's or Normal distribution in linear regression? I like the representation of the student t distribution as a mixture of a normal distribution and a gamma distribution: $$Student(x|\mu,\sigma^2,\nu)=\int_{0}^{\infty}Normal\left(x|\mu,\frac{\sigma^2}{\rho}\right)Gamma\left(\rho|\frac{\nu}{2},\frac{\nu}{2}\right)d\rho$$ Note that the mean of the gamma distribution is $E[\rho|\nu]=1$ and the variance of this distribution is $V[\rho|\nu]=\frac{2}{\nu}$. So we can view the t-distribution as generalising the constant variance assumption to a "similar" variance assumption. $\nu$ basically controls how similar we allow the variances to be. You can also view this as "random weighted" regression, for we can use the above integral as a "hidden variable" representation as follows: $$y_i=\mu_i+\frac{e_i}{\sqrt{\rho_i}}$$ where $e_i\sim N(0,\sigma^2)$ and $\rho_i\sim Gamma\left(\frac{\nu}{2},\frac{\nu}{2}\right)$, all variables independent. In fact this is basically just the definition of the t-distribution, as $Gamma\left(\frac{\nu}{2},\frac{\nu}{2}\right)\sim \frac{1}{\nu}\chi^2_\nu$. You can see why this result makes the student t distribution "robust" compared to the normal: a large error $y_i-\mu_i$ can occur due to a large value of $\sigma^2$ or due to a small value of $\rho_i$. Now because $\sigma^2$ is common to all observations, but $\rho_i$ is specific to the ith one, the general "common sense" thing to conclude is that outliers give evidence for small $\rho_i$. Additionally, if you were to do linear regression $\mu_i=x_i^T\beta$, you will find that $\rho_i$ is the weight for the ith observation, assuming that $\rho_i$ is known: $$\hat{\beta}=(\sum_i\rho_ix_ix_i^T)^{-1}(\sum_i\rho_ix_iy_i)$$ So an outlier constitutes evidence for small $\rho_i$, which means the ith observation gets less weight. Additionally, a small "outlier" - an observation which is predicted/fitted much better than the rest - constitutes evidence for large $\rho_i$. Hence this observation will be given more weight in the regression. This is in line with what one would intuitively do with an outlier or a good data point. Note that there is no "rule" for deciding these things, although my and others' responses to this question may be useful for finding some tests you can do along the finite variance path (the student t has infinite variance for degrees of freedom less than or equal to two).
When to use Student's or Normal distribution in linear regression? I like the representation of the student t distribution as a mixture of a normal distribution and a gamma distribution: $$Student(x|\mu,\sigma^2,\nu)=\int_{0}^{\infty}Normal\left(x|\mu,\frac{\sigma^2}
27,574
Least angle regression keeps the correlations monotonically decreasing and tied?
This is problem 3.23 on page 97 of Hastie et al., Elements of Statistical Learning, 2nd. ed. (5th printing). The key to this problem is a good understanding of ordinary least squares (i.e., linear regression), particularly the orthogonality of the fitted values and the residuals. Orthogonality lemma: Let $X$ be the $n \times p$ design matrix, $y$ the response vector and $\beta$ the (true) parameters. Assuming $X$ is full-rank (which we will throughout), the OLS estimates of $\beta$ are $\hat{\beta} = (X^T X)^{-1} X^T y$. The fitted values are $\hat{y} = X (X^T X)^{-1} X^T y$. Then $\langle \hat{y}, y-\hat{y} \rangle = \hat{y}^T (y - \hat{y}) = 0$. That is, the fitted values are orthogonal to the residuals. This follows since $X^T (y - \hat{y}) = X^T y - X^T X (X^T X)^{-1} X^T y = X^T y - X^T y = 0$. Now, let $x_j$ be a column vector such that $x_j$ is the $j$th column of $X$. The assumed conditions are: $\frac{1}{N} \langle x_j, x_j \rangle = 1$ for each $j$, $\frac{1}{N} \langle y, y \rangle = 1$, $\frac{1}{N} \langle x_j, 1_p \rangle = \frac{1}{N} \langle y, 1_p \rangle = 0$ where $1_p$ denotes a vector of ones of length $p$, and $\frac{1}{N} | \langle x_j, y \rangle | = \lambda$ for all $j$. Note that in particular, the last statement of the orthogonality lemma is identical to $\langle x_j, y - \hat{y} \rangle = 0$ for all $j$. The correlations are tied Now, $u(\alpha) = \alpha X \hat{\beta} = \alpha \hat{y}$. So, $$ \langle x_j, y - u(a) \rangle = \langle x_j, (1-\alpha) y + \alpha y - \alpha \hat{y} \rangle = (1-\alpha) \langle x_j, y \rangle + \alpha \langle x_j, y - \hat{y} \rangle , $$ and the second term on the right-hand side is zero by the orthogonality lemma, so $$ \frac{1}{N} | \langle x_j, y - u(\alpha) \rangle | = (1-\alpha) \lambda , $$ as desired. The absolute value of the correlations are just $$ \hat{\rho}_j(\alpha) = \frac{\frac{1}{N} | \langle x_j, y - u(\alpha) \rangle |}{\sqrt{\frac{1}{N} \langle x_j, x_j \rangle }\sqrt{\frac{1}{N} \langle y - u(\alpha), y - u(\alpha) \rangle }} = \frac{(1-\alpha)\lambda}{\sqrt{\frac{1}{N} \langle y - u(\alpha), y - u(\alpha) \rangle }} $$ Note: The right-hand side above is independent of $j$ and the numerator is just the same as the covariance since we've assumed that all the $x_j$'s and $y$ are centered (so, in particular, no subtraction of the mean is necessary). What's the point? As $\alpha$ increases the response vector is modified so that it inches its way toward that of the (restricted!) least-squares solution obtained from incorporating only the first $p$ parameters in the model. This simultaneously modifies the estimated parameters since they are simple inner products of the predictors with the (modified) response vector. The modification takes a special form though. It keeps the (magnitude of) the correlations between the predictors and the modified response the same throughout the process (even though the value of the correlation is changing). Think about what this is doing geometrically and you'll understand the name of the procedure! Explicit form of the (absolute) correlation Let's focus on the term in the denominator, since the numerator is already in the required form. We have $$ \langle y - u(\alpha), y - u(\alpha) \rangle = \langle (1-\alpha) y + \alpha y - u(\alpha), (1-\alpha) y + \alpha y - u(\alpha) \rangle . 
$$ Substituting in $u(\alpha) = \alpha \hat{y}$ and using the linearity of the inner product, we get $$ \langle y - u(\alpha), y - u(\alpha) \rangle = (1-\alpha)^2 \langle y, y \rangle + 2\alpha(1-\alpha) \langle y, y - \hat{y} \rangle + \alpha^2 \langle y-\hat{y}, y-\hat{y} \rangle . $$ Observe that $\langle y, y \rangle = N$ by assumption, $\langle y, y - \hat{y} \rangle = \langle y - \hat{y}, y - \hat{y} \rangle + \langle \hat{y}, y - \hat{y} \rangle = \langle y - \hat{y}, y - \hat{y}\rangle$, by applying the orthogonality lemma (yet again) to the second term in the middle; and, $\langle y - \hat{y}, y - \hat{y} \rangle = \mathrm{RSS}$ by definition. Putting this all together, you'll notice that we get $$ \hat{\rho}_j(\alpha) = \frac{(1-\alpha) \lambda}{\sqrt{ (1-\alpha)^2 + \frac{\alpha(2-\alpha)}{N} \mathrm{RSS}}} = \frac{(1-\alpha) \lambda}{\sqrt{ (1-\alpha)^2 (1 - \frac{\mathrm{RSS}}{N}) + \frac{1}{N} \mathrm{RSS}}} $$ To wrap things up, $1 - \frac{\mathrm{RSS}}{N} = \frac{1}{N} (\langle y, y, \rangle - \langle y - \hat{y}, y - \hat{y} \rangle ) \geq 0$ and so it's clear that $\hat{\rho}_j(\alpha)$ is monotonically decreasing in $\alpha$ and $\hat{\rho}_j(\alpha) \downarrow 0$ as $\alpha \uparrow 1$. Epilogue: Concentrate on the ideas here. There is really only one. The orthogonality lemma does almost all the work for us. The rest is just algebra, notation, and the ability to put these last two to work.
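The key identity is easy to verify numerically. The sketch below uses simulated centered, standardized data; it does not enforce the equal-correlation starting condition of the exercise, so it only checks that the covariances all shrink by the common factor $(1-\alpha)$ and that each absolute correlation decreases monotonically.
# Numeric check: <x_j, y - alpha*yhat> = (1 - alpha) <x_j, y>, and |cor| decreases.
set.seed(1)
N <- 200; p <- 3
X <- scale(matrix(rnorm(N * p), N, p))            # centered, unit-variance columns
y <- scale(rnorm(N) + X %*% c(1, 1, 1))[, 1]      # centered response
yhat  <- fitted(lm(y ~ X - 1))
alpha <- seq(0, 0.99, by = 0.01)
inner <- sapply(alpha, function(a) crossprod(X[, 1], y - a * yhat))
all.equal(inner, (1 - alpha) * drop(crossprod(X[, 1], y)))   # TRUE: proportional shrinkage
rho <- sapply(alpha, function(a) abs(cor(X[, 1], y - a * yhat)))
all(diff(rho) < 0)                                           # TRUE: monotone decrease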
Least angle regression keeps the correlations monotonically decreasing and tied?
This is problem 3.23 on page 97 of Hastie et al., Elements of Statistical Learning, 2nd. ed. (5th printing). The key to this problem is a good understanding of ordinary least squares (i.e., linear reg
Least angle regression keeps the correlations monotonically decreasing and tied? This is problem 3.23 on page 97 of Hastie et al., Elements of Statistical Learning, 2nd. ed. (5th printing). The key to this problem is a good understanding of ordinary least squares (i.e., linear regression), particularly the orthogonality of the fitted values and the residuals. Orthogonality lemma: Let $X$ be the $n \times p$ design matrix, $y$ the response vector and $\beta$ the (true) parameters. Assuming $X$ is full-rank (which we will throughout), the OLS estimates of $\beta$ are $\hat{\beta} = (X^T X)^{-1} X^T y$. The fitted values are $\hat{y} = X (X^T X)^{-1} X^T y$. Then $\langle \hat{y}, y-\hat{y} \rangle = \hat{y}^T (y - \hat{y}) = 0$. That is, the fitted values are orthogonal to the residuals. This follows since $X^T (y - \hat{y}) = X^T y - X^T X (X^T X)^{-1} X^T y = X^T y - X^T y = 0$. Now, let $x_j$ be a column vector such that $x_j$ is the $j$th column of $X$. The assumed conditions are: $\frac{1}{N} \langle x_j, x_j \rangle = 1$ for each $j$, $\frac{1}{N} \langle y, y \rangle = 1$, $\frac{1}{N} \langle x_j, 1_p \rangle = \frac{1}{N} \langle y, 1_p \rangle = 0$ where $1_p$ denotes a vector of ones of length $p$, and $\frac{1}{N} | \langle x_j, y \rangle | = \lambda$ for all $j$. Note that in particular, the last statement of the orthogonality lemma is identical to $\langle x_j, y - \hat{y} \rangle = 0$ for all $j$. The correlations are tied Now, $u(\alpha) = \alpha X \hat{\beta} = \alpha \hat{y}$. So, $$ \langle x_j, y - u(a) \rangle = \langle x_j, (1-\alpha) y + \alpha y - \alpha \hat{y} \rangle = (1-\alpha) \langle x_j, y \rangle + \alpha \langle x_j, y - \hat{y} \rangle , $$ and the second term on the right-hand side is zero by the orthogonality lemma, so $$ \frac{1}{N} | \langle x_j, y - u(\alpha) \rangle | = (1-\alpha) \lambda , $$ as desired. The absolute value of the correlations are just $$ \hat{\rho}_j(\alpha) = \frac{\frac{1}{N} | \langle x_j, y - u(\alpha) \rangle |}{\sqrt{\frac{1}{N} \langle x_j, x_j \rangle }\sqrt{\frac{1}{N} \langle y - u(\alpha), y - u(\alpha) \rangle }} = \frac{(1-\alpha)\lambda}{\sqrt{\frac{1}{N} \langle y - u(\alpha), y - u(\alpha) \rangle }} $$ Note: The right-hand side above is independent of $j$ and the numerator is just the same as the covariance since we've assumed that all the $x_j$'s and $y$ are centered (so, in particular, no subtraction of the mean is necessary). What's the point? As $\alpha$ increases the response vector is modified so that it inches its way toward that of the (restricted!) least-squares solution obtained from incorporating only the first $p$ parameters in the model. This simultaneously modifies the estimated parameters since they are simple inner products of the predictors with the (modified) response vector. The modification takes a special form though. It keeps the (magnitude of) the correlations between the predictors and the modified response the same throughout the process (even though the value of the correlation is changing). Think about what this is doing geometrically and you'll understand the name of the procedure! Explicit form of the (absolute) correlation Let's focus on the term in the denominator, since the numerator is already in the required form. We have $$ \langle y - u(\alpha), y - u(\alpha) \rangle = \langle (1-\alpha) y + \alpha y - u(\alpha), (1-\alpha) y + \alpha y - u(\alpha) \rangle . 
$$ Substituting in $u(\alpha) = \alpha \hat{y}$ and using the linearity of the inner product, we get $$ \langle y - u(\alpha), y - u(\alpha) \rangle = (1-\alpha)^2 \langle y, y \rangle + 2\alpha(1-\alpha) \langle y, y - \hat{y} \rangle + \alpha^2 \langle y-\hat{y}, y-\hat{y} \rangle . $$ Observe that $\langle y, y \rangle = N$ by assumption, $\langle y, y - \hat{y} \rangle = \langle y - \hat{y}, y - \hat{y} \rangle + \langle \hat{y}, y - \hat{y} \rangle = \langle y - \hat{y}, y - \hat{y}\rangle$, by applying the orthogonality lemma (yet again) to the second term in the middle; and, $\langle y - \hat{y}, y - \hat{y} \rangle = \mathrm{RSS}$ by definition. Putting this all together, you'll notice that we get $$ \hat{\rho}_j(\alpha) = \frac{(1-\alpha) \lambda}{\sqrt{ (1-\alpha)^2 + \frac{\alpha(2-\alpha)}{N} \mathrm{RSS}}} = \frac{(1-\alpha) \lambda}{\sqrt{ (1-\alpha)^2 (1 - \frac{\mathrm{RSS}}{N}) + \frac{1}{N} \mathrm{RSS}}} $$ To wrap things up, $1 - \frac{\mathrm{RSS}}{N} = \frac{1}{N} (\langle y, y, \rangle - \langle y - \hat{y}, y - \hat{y} \rangle ) \geq 0$ and so it's clear that $\hat{\rho}_j(\alpha)$ is monotonically decreasing in $\alpha$ and $\hat{\rho}_j(\alpha) \downarrow 0$ as $\alpha \uparrow 1$. Epilogue: Concentrate on the ideas here. There is really only one. The orthogonality lemma does almost all the work for us. The rest is just algebra, notation, and the ability to put these last two to work.
Least angle regression keeps the correlations monotonically decreasing and tied? This is problem 3.23 on page 97 of Hastie et al., Elements of Statistical Learning, 2nd. ed. (5th printing). The key to this problem is a good understanding of ordinary least squares (i.e., linear reg
27,575
Rating system taking account of number of votes
You are talking about a shrinkage estimator. Imdb is possibly the most famous example of this, how they calculate which movies will make it onto the top 250. It relies on the equation, weighted rating (WR) = (v ÷ (v+m)) × R + (m ÷ (v+m)) × C , where: * R = average for the movie (mean) = (Rating) * v = number of votes for the movie = (votes) * m = minimum votes required to be listed in the Top 250 (currently 3000) * C = the mean vote across the whole report (currently 6.9) They call this a "true bayesian rating" and that's true in the sense that our prior for the parameter "average rating" is that it is the same as for all other movies. This prior is then updated based on the "likelihood," which is the average rating for that movie, which has more strength if it has more votes. But I'm not sure whether this technically qualifies as bayesian, because neither the prior nor the posterior is a distribution... Can anyone clarify this?
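For concreteness, the quoted formula is a one-liner in R; m and C below are just the example values cited in the answer, and the calls only illustrate how the vote count controls the shrinkage.
# IMDb-style shrinkage ("weighted rating") of a movie mean toward the global mean.
weighted_rating <- function(R, v, m = 3000, C = 6.9) {
  (v / (v + m)) * R + (m / (v + m)) * C
}
weighted_rating(R = 9.2, v = 500)      # few votes: pulled strongly toward C = 6.9
weighted_rating(R = 9.2, v = 500000)   # many votes: stays close to the raw mean R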
Rating system taking account of number of votes
You are talking about a shrinkage estimator. Imdb is possibly the most famous example of this, how they calculate which movies will make it onto the top 250. It relies on the equation, weighted ratin
Rating system taking account of number of votes You are talking about a shrinkage estimator. Imdb is possibly the most famous example of this, how they calculate which movies will make it onto the top 250. It relies on the equation, weighted rating (WR) = (v ÷ (v+m)) × R + (m ÷ (v+m)) × C , where: * R = average for the movie (mean) = (Rating) * v = number of votes for the movie = (votes) * m = minimum votes required to be listed in the Top 250 (currently 3000) * C = the mean vote across the whole report (currently 6.9) They call this a "true bayesian rating" and that's true in the sense that our prior for the parameter "average rating" is that it is the same as for all other movies. This prior is then updated based on the "likelihood," which is the average rating for that movie, which has more strength if it has more votes. But I'm not sure whether this technically qualifies as bayesian, because neither the prior nor the posterior is a distribution... Can anyone clarify this?
Rating system taking account of number of votes You are talking about a shrinkage estimator. Imdb is possibly the most famous example of this, how they calculate which movies will make it onto the top 250. It relies on the equation, weighted ratin
27,576
Rating system taking account of number of votes
You could use a system like reddit's "best" algorithm for sorting comments: This algorithm treats the vote count as a statistical sampling of a hypothetical full vote by everyone, much as in an opinion poll. It uses this to calculate the 95% confidence score for the comment. That is, it gives the comment a provisional ranking that it is 95% sure it will get to. The more votes, the closer the 95% confidence score gets to the actual score So in the case of 3 people voting 5/5, you might be 95% sure the "actual" rating is at least a 1, whereas in the case of 500 people voting you might be 95% sure the "actual" rating is at least a 4/5.
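One simple way to get such a "95% sure it is at least this good" score for binary up/down votes is the lower end of a Clopper-Pearson interval, which base R computes via binom.test; this is a sketch in that spirit, not reddit's exact formula.
# Rank items by the lower 95% confidence bound on the proportion of positive votes.
lower_bound <- function(pos, n) binom.test(pos, n)$conf.int[1]
lower_bound(3, 3)       # 3 of 3 positive: the bound is still modest (about 0.29)
lower_bound(450, 500)   # 450 of 500 positive: the bound is close to 0.9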
Rating system taking account of number of votes
You could use a system like reddit's "best" algorithm for sorting comments: This algorithm treats the vote count as a statistical sampling of a hypothetical full vote by everyone, much as in an opini
Rating system taking account of number of votes You could use a system like reddit's "best" algorithm for sorting comments: This algorithm treats the vote count as a statistical sampling of a hypothetical full vote by everyone, much as in an opinion poll. It uses this to calculate the 95% confidence score for the comment. That is, it gives the comment a provisional ranking that it is 95% sure it will get to. The more votes, the closer the 95% confidence score gets to the actual score So in the case of 3 people voting 5/5, you might be 95% sure the "actual" rating is at least a 1, whereas in the case of 500 people voting you might be 95% sure the "actual" rating is at least a 4/5.
Rating system taking account of number of votes You could use a system like reddit's "best" algorithm for sorting comments: This algorithm treats the vote count as a statistical sampling of a hypothetical full vote by everyone, much as in an opini
27,577
Rating system taking account of number of votes
You may run into problems implied by the Gibbard Satterthwaite Theorem or Arrow's Impossibility Theorem, or any of the results of voting theory...
Rating system taking account of number of votes
You may run into problems implied by the Gibbard Satterthwaite Theorem or Arrow's Impossibility Theorem, or any of the results of voting theory...
Rating system taking account of number of votes You may run into problems implied by the Gibbard Satterthwaite Theorem or Arrow's Impossibility Theorem, or any of the results of voting theory...
Rating system taking account of number of votes You may run into problems implied by the Gibbard Satterthwaite Theorem or Arrow's Impossibility Theorem, or any of the results of voting theory...
27,578
Rating system taking account of number of votes
There is a simple (and simple to implement) heuristic: first seed the pool of votes with a small number of dummy votes at the average rating, and later let them be replaced by incoming votes. So, for instance, a new object appears and you give it a few dummy votes rating it 2.5/5 (this is the best you can tell about it at the zero-knowledge point). Then the first vote comes, let's say 5/5, but it is somewhat tempered by the rest of the initial pool and the object's mean is only slightly above 2.5. Then the next votes come and the mean gradually moves from the initial guess to the real average, which then has time to stabilize. Finally this algorithm converges to the normal vote mean.
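A simplified variant of this heuristic (which keeps the pseudo-votes rather than gradually replacing them; k = 5 and the 2.5 midpoint are arbitrary choices made for the illustration) is just a weighted average:
# Seed the item with k pseudo-votes at the midpoint, then average with real votes.
seeded_mean <- function(votes, k = 5, midpoint = 2.5) {
  (k * midpoint + sum(votes)) / (k + length(votes))
}
seeded_mean(c(5))          # a single 5/5 vote only nudges the mean above 2.5
seeded_mean(rep(5, 100))   # many 5/5 votes: the dummy votes are drowned out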
Rating system taking account of number of votes
There is a simple (also to implement) heuristic to first seed the pool of votes with small number of dummy votes with average voting, and later replace it with incoming votes. So, for instance, new ob
Rating system taking account of number of votes There is a simple (and simple to implement) heuristic: first seed the pool of votes with a small number of dummy votes at the average rating, and later let them be replaced by incoming votes. So, for instance, a new object appears and you give it a few dummy votes rating it 2.5/5 (this is the best you can tell about it at the zero-knowledge point). Then the first vote comes, let's say 5/5, but it is somewhat tempered by the rest of the initial pool and the object's mean is only slightly above 2.5. Then the next votes come and the mean gradually moves from the initial guess to the real average, which then has time to stabilize. Finally this algorithm converges to the normal vote mean.
Rating system taking account of number of votes There is a simple (also to implement) heuristic to first seed the pool of votes with small number of dummy votes with average voting, and later replace it with incoming votes. So, for instance, new ob
27,579
Rating system taking account of number of votes
You can use a Bayesian approach. If you have no votes at all then a naïve rating for an item would be based on the distribution of ratings/votes among all the items. That would be the prior distribution for the rating. Then, with more data/votes for the item, you can update your estimate of the true distribution of ratings/votes for that item and compute an estimate for the rating. So you have observations of votes that are categorical $1,2,3,4,5$. You could describe the prior for this by a Dirichlet distribution (whose parameters are estimated based on the items that you already know). Then the posterior will also be a Dirichlet distribution (since the Dirichlet distribution is the conjugate prior).
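A minimal sketch of the updating step, with made-up prior pseudo-counts standing in for the site-wide distribution of ratings:
# Dirichlet-multinomial updating for 1-5 star ratings (illustrative values only).
prior <- c(2, 3, 5, 6, 4)             # pseudo-counts for ratings 1..5 (assumed)
votes <- c(0, 0, 0, 0, 3)             # an item that so far has three 5-star votes
posterior <- prior + votes            # conjugacy: posterior is again Dirichlet
probs <- posterior / sum(posterior)   # posterior mean of the category probabilities
sum(1:5 * probs)                      # expected rating under the posterior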
Rating system taking account of number of votes
You can use a Bayesian approach. If you have no votes at all then a naïve rating for an item would be based on the distribution of ratings/votes among all the items. That would be the prior distributi
Rating system taking account of number of votes You can use a Bayesian approach. If you have no votes at all then a naïve rating for an item would be based on the distribution of ratings/votes among all the items. That would be the prior distribution for the rating Then with more data/votes for the item you can update your estimate about the true distribution of ratings/votes for an item and compute an estimate for the rating. So you have observations of votes that are categorical $1,2,3,4,5$. You could describe the prior for this by a Dirichlet distribution (whose parameters are estimated based on the items that you already know). Then the posterior will also be a Dirichlet distribution (since the Dirichlet distribution is the conjugate prior).
Rating system taking account of number of votes You can use a Bayesian approach. If you have no votes at all then a naïve rating for an item would be based on the distribution of ratings/votes among all the items. That would be the prior distributi
27,580
Rating system taking account of number of votes
You can choose the lower bound of a $1-\alpha$ confidence interval for a binomial proportion, i.e. Clopper-Pearson interval. Or, if you need a closed formula, you can use the lower bound of the Wilson interval, i.e. $$\frac{1}{1+z^2/n}\left[\hat{p} + \frac{z^2}{2n} - z \sqrt{\frac{\hat{p}(1-\hat{p})}{n}+\frac{z^2}{4n^2}}\right]$$ where $z$ is the $1-\alpha/2$ quantile of the standard normal distribution. Edit: Sorry, the suggested confidence intervals only make sense for binary votes (like/like not). But the underlying idea also works with other confidence intervals: the lower bound will be larger for larger numbers of votes.
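The closed-form Wilson lower bound above translates directly into R; the example calls are only meant to show how the bound reacts to the number of votes.
# Wilson lower bound for a binomial proportion (binary like / not-like votes).
wilson_lower <- function(pos, n, alpha = 0.05) {
  z <- qnorm(1 - alpha / 2)
  phat <- pos / n
  (phat + z^2 / (2 * n) - z * sqrt(phat * (1 - phat) / n + z^2 / (4 * n^2))) /
    (1 + z^2 / n)
}
wilson_lower(3, 3)      # all 3 votes positive, but the bound is only about 0.44
wilson_lower(450, 500)  # with many votes the bound approaches the observed 0.9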
Rating system taking account of number of votes
You can choose the lower bound of a $1-\alpha$ confidence interval for a binomial proportion, i.e. Clopper-Pearson interval. Or, if you need a closed formula, you can use the lower bound of the Wilson
Rating system taking account of number of votes You can choose the lower bound of a $1-\alpha$ confidence interval for a binomial proportion, i.e. Clopper-Pearson interval. Or, if you need a closed formula, you can use the lower bound of the Wilson interval, i.e. $$\frac{1}{1+z^2/n}\left[\hat{p} + \frac{z^2}{2n} - z \sqrt{\frac{\hat{p}(1-\hat{p})}{n}+\frac{z^2}{4n^2}}\right]$$ where $z$ is the $1-\alpha/2$ quantile of the standard normal distribution. Edit: Sorry, the suggested confidence intervals only make sense for binary votes (like/like not). But the underlying idea also works with other confidence intervals: the lower bound will be larger for larger numbers of votes.
Rating system taking account of number of votes You can choose the lower bound of a $1-\alpha$ confidence interval for a binomial proportion, i.e. Clopper-Pearson interval. Or, if you need a closed formula, you can use the lower bound of the Wilson
27,581
R package for combining factor levels for datamining?
It seems it's just a matter of "releveling" the factor; no need to compute partial sums or make a copy of the original vector. E.g., set.seed(101) a <- factor(LETTERS[sample(5, 150, replace=TRUE, prob=c(.1, .15, rep(.75/3,3)))]) p <- 1/5 lf <- names(which(prop.table(table(a)) < p)) levels(a)[levels(a) %in% lf] <- "Other" Here, the original factor levels are distributed as follows: A B C D E 18 23 35 36 38 and then it becomes Other C D E 41 35 36 38 It may be conveniently wrapped into a function. There is a combine_factor() function in the reshape package, so I guess it could be useful too. Also, as you seem interested in data mining, you might have a look at the caret package. It has a lot of useful features for data preprocessing, including functions like nearZeroVar() that allows to flag predictors with very imbalanced distribution of observed values (See the vignette, example data, pre-processing functions, visualizations and other functions, p. 5, for example of use).
R package for combining factor levels for datamining?
It seems it's just a matter of "releveling" the factor; no need to compute partial sums or make a copy of the original vector. E.g., set.seed(101) a <- factor(LETTERS[sample(5, 150, replace=TRUE,
R package for combining factor levels for datamining? It seems it's just a matter of "releveling" the factor; no need to compute partial sums or make a copy of the original vector. E.g., set.seed(101) a <- factor(LETTERS[sample(5, 150, replace=TRUE, prob=c(.1, .15, rep(.75/3,3)))]) p <- 1/5 lf <- names(which(prop.table(table(a)) < p)) levels(a)[levels(a) %in% lf] <- "Other" Here, the original factor levels are distributed as follows: A B C D E 18 23 35 36 38 and then it becomes Other C D E 41 35 36 38 It may be conveniently wrapped into a function. There is a combine_factor() function in the reshape package, so I guess it could be useful too. Also, as you seem interested in data mining, you might have a look at the caret package. It has a lot of useful features for data preprocessing, including functions like nearZeroVar() that allows to flag predictors with very imbalanced distribution of observed values (See the vignette, example data, pre-processing functions, visualizations and other functions, p. 5, for example of use).
R package for combining factor levels for datamining? It seems it's just a matter of "releveling" the factor; no need to compute partial sums or make a copy of the original vector. E.g., set.seed(101) a <- factor(LETTERS[sample(5, 150, replace=TRUE,
27,582
R package for combining factor levels for datamining?
The only problem with Christopher's answer is that it will mix up the original ordering of the factor. Here is my fix: Merge.factors <- function(x, p) { t <- table(x) levt <- cbind(names(t), names(t)) levt[t/sum(t)<p, 2] <- "Other" change.levels(x, levt) } where change.levels is the following function. I wrote it some time ago, so I suspect there might be better ways of achieving what it does. change.levels <- function(f, levt) { ## Change the names of the factor f levels using the ## substitution table levt. ## In the first column there are the original levels, in ## the second column -- the substitutes lv <- levels(f) if(sum(sort(lv) != sort(levt[, 1]))>0) stop ("The level names in the substitution table do not match the given level names") res <- rep(NA, length(f)) for(i in lv) { res[f==i] <- as.character(levt[levt[, 1]==i, 2]) } factor(res) }
R package for combining factor levels for datamining?
The only problem with Christopher answer is that it will mix up the original ordering of the factor. Here is my fix: Merge.factors <- function(x, p) { t <- table(x) levt <- cbind(names(t),
R package for combining factor levels for datamining? The only problem with Christopher answer is that it will mix up the original ordering of the factor. Here is my fix: Merge.factors <- function(x, p) { t <- table(x) levt <- cbind(names(t), names(t)) levt[t/sum(t)<p, 2] <- "Other" change.levels(x, levt) } where change.levels is the following function. I wrote it some time ago, so I suspect there might be better ways of achieving what it does. change.levels <- function(f, levt) { ##Change the the names of the factor f levels from ##substitution table levt. ## In the first column there are the original levels, in ## the second column -- the substitutes lv <- levels(f) if(sum(sort(lv) != sort(levt[, 1]))>0) stop ("The names from substitution table does not match given level names") res <- rep(NA, length(f)) for(i in lv) { res[f==i] <- as.character(levt[levt[, 1]==i, 2]) } factor(res) }
R package for combining factor levels for datamining? The only problem with Christopher answer is that it will mix up the original ordering of the factor. Here is my fix: Merge.factors <- function(x, p) { t <- table(x) levt <- cbind(names(t),
27,583
R package for combining factor levels for datamining?
I wrote a quick function that will accomplish this goal. I'm a novice R user, so it may be slow with large tables. Merge.factors <- function(x, p) { #Combines factor levels in x that are less than a specified proportion, p. t <- table(x) y <- subset(t, prop.table(t) < p) z <- subset(t, prop.table(t) >= p) other <- rep("Other", sum(y)) new.table <- c(z, table(other)) new.x <- as.factor(rep(names(new.table), new.table)) return(new.x) } As an example of it in action: > a <- rep("a", 100) > b <- rep("b", 1000) > c <- rep("c", 1000) > d <- rep("d", 1000) > e <- rep("e", 400) > f <- rep("f", 100) > x <- factor(c(a, b, c, d, e, f)) > summary(x) a b c d e f 100 1000 1000 1000 400 100 > prop.table(table(x)) x a b c d e f 0.02777778 0.27777778 0.27777778 0.27777778 0.11111111 0.02777778 > > w <- Merge.factors(x, .05) > summary(w) b c d e Other 1000 1000 1000 400 200 > class(w) [1] "factor"
R package for combining factor levels for datamining?
I wrote a quick function that will accomplish this goal. I'm a novice R user, so it may be slow with large tables. Merge.factors <- function(x, p) { #Combines factor levels in x that are less tha
R package for combining factor levels for datamining? I wrote a quick function that will accomplish this goal. I'm a novice R user, so it may be slow with large tables. Merge.factors <- function(x, p) { #Combines factor levels in x that are less than a specified proportion, p. t <- table(x) y <- subset(t, prop.table(t) < p) z <- subset(t, prop.table(t) >= p) other <- rep("Other", sum(y)) new.table <- c(z, table(other)) new.x <- as.factor(rep(names(new.table), new.table)) return(new.x) } As an example of it in action: > a <- rep("a", 100) > b <- rep("b", 1000) > c <- rep("c", 1000) > d <- rep("d", 1000) > e <- rep("e", 400) > f <- rep("f", 100) > x <- factor(c(a, b, c, d, e, f)) > summary(x) a b c d e f 100 1000 1000 1000 400 100 > prop.table(table(x)) x a b c d e f 0.02777778 0.27777778 0.27777778 0.27777778 0.11111111 0.02777778 > > w <- Merge.factors(x, .05) > summary(w) b c d e Other 1000 1000 1000 400 200 > class(w) [1] "factor"
R package for combining factor levels for datamining? I wrote a quick function that will accomplish this goal. I'm a novice R user, so it may be slow with large tables. Merge.factors <- function(x, p) { #Combines factor levels in x that are less tha
27,584
Calculating ratio of sample data used for model fitting/training and validation
Well, as you said, there is no black and white answer. I generally don't divide the data into 2 parts but use methods like k-fold cross validation instead. In k-fold cross validation you divide your data randomly into k parts, fit your model on k-1 parts and test the errors on the left-out part. You repeat the process k times, leaving each part out of the fitting one by one. You can take the mean error over the k iterations as an indication of the model error. This works really well if you want to compare the predictive power of different models. One extreme form of k-fold cross validation is leave-one-out cross validation, where you just leave out one data point for testing and fit the model to all the remaining points. You then repeat the process n times, leaving out each data point one by one. I generally prefer k-fold cross validation over leave-one-out cross validation ... just a personal choice
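A bare-bones version of the k-fold loop in R, written from the description above (the linear model and the simulated data are just placeholders for this sketch):
# k-fold cross-validation of a linear model's mean squared prediction error.
kfold_mse <- function(data, k = 10) {
  folds <- sample(rep(1:k, length.out = nrow(data)))   # random fold assignment
  errs <- sapply(1:k, function(i) {
    fit  <- lm(y ~ ., data = data[folds != i, ])           # fit on k-1 folds
    pred <- predict(fit, newdata = data[folds == i, ])     # predict the held-out fold
    mean((data$y[folds == i] - pred)^2)
  })
  mean(errs)                                            # average over the k folds
}
set.seed(1)
d <- data.frame(x = rnorm(100)); d$y <- 1 + 2 * d$x + rnorm(100)
kfold_mse(d, k = 10)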
Calculating ratio of sample data used for model fitting/training and validation
Well as you said there is no black and white answer. I generally don't divide the data in 2 parts but use methods like k-fold cross validation instead. In k-fold cross validation you divide your data
Calculating ratio of sample data used for model fitting/training and validation Well as you said there is no black and white answer. I generally don't divide the data in 2 parts but use methods like k-fold cross validation instead. In k-fold cross validation you divide your data randomly into k parts and fit your model on k-1 parts and test the errors on the left out part. You repeat the process k times leaving each part out of fitting one by one. You can take the mean error from each of the k iterations as an indication of the model error. This works really well if you want to compare the predictive power of different models. One extreme form of k-fold cross validation is the generalized cross validation where you just leave out one data point for testing and fit the model to all the remaining points. Then repeat the process n times leaving out each data point one by one. I generally prefer k-fold cross validation over the generalized cross validation ... just a personal choice
Calculating ratio of sample data used for model fitting/training and validation Well as you said there is no black and white answer. I generally don't divide the data in 2 parts but use methods like k-fold cross validation instead. In k-fold cross validation you divide your data
27,585
Calculating ratio of sample data used for model fitting/training and validation
It really depends on the amount of data you have, the specific cost of the methods, and how exact you want your result to be. Some examples: If you have little data, you probably want to use cross-validation (k-fold, leave-one-out, etc.). Your model will probably not take many resources to train and test anyway. These are good ways to get the most out of your data. If you have a lot of data: you probably want to take a reasonably large test set, ensuring that there will be little possibility that some strange samples will give too much variance to your results. How much data should you take? It depends completely on your data and model. In speech recognition, for example, if you were to take too much data (let's say 3000 sentences), your experiments would take days, as a realtime factor of 7-10 is common. If you were to take too little, the result would depend too much on the speakers that you are choosing (which are not allowed in the training set). Remember also, in a lot of cases it is good to have a validation/development set too!
Calculating ratio of sample data used for model fitting/training and validation
It really depends on the amount of data you have, the specific cost of methods and how exactly you want your result to be. Some examples: If you have little data, you probably want to use cross-valida
Calculating ratio of sample data used for model fitting/training and validation It really depends on the amount of data you have, the specific cost of the methods, and how exact you want your result to be. Some examples: If you have little data, you probably want to use cross-validation (k-fold, leave-one-out, etc.). Your model will probably not take many resources to train and test anyway. These are good ways to get the most out of your data. If you have a lot of data: you probably want to take a reasonably large test set, ensuring that there will be little possibility that some strange samples will give too much variance to your results. How much data should you take? It depends completely on your data and model. In speech recognition, for example, if you were to take too much data (let's say 3000 sentences), your experiments would take days, as a realtime factor of 7-10 is common. If you were to take too little, the result would depend too much on the speakers that you are choosing (which are not allowed in the training set). Remember also, in a lot of cases it is good to have a validation/development set too!
Calculating ratio of sample data used for model fitting/training and validation It really depends on the amount of data you have, the specific cost of methods and how exactly you want your result to be. Some examples: If you have little data, you probably want to use cross-valida
27,586
Calculating ratio of sample data used for model fitting/training and validation
A 1:10 test:train ratio is popular because it looks round, 1:9 is popular because of 10-fold CV, and 1:2 is popular because it is also round and resembles the bootstrap. Sometimes one gets a test set from some data-specific criterion, for instance the last year for testing and the years before for training. The general rule is this: the training set must be large enough so that the accuracy won't drop significantly, and the test set must be large enough to silence random fluctuations. Still, I prefer CV, since it also gives you a distribution of the error.
Calculating ratio of sample data used for model fitting/training and validation
1:10 test:train ratio is popular because it looks round, 1:9 is popular because of 10-fold CV, 1:2 is popular because it is also round and reassembles bootstrap. Sometimes one gets a test from some da
Calculating ratio of sample data used for model fitting/training and validation A 1:10 test:train ratio is popular because it looks round, 1:9 is popular because of 10-fold CV, and 1:2 is popular because it is also round and resembles the bootstrap. Sometimes one gets a test set from some data-specific criterion, for instance the last year for testing and the years before for training. The general rule is this: the training set must be large enough so that the accuracy won't drop significantly, and the test set must be large enough to silence random fluctuations. Still, I prefer CV, since it also gives you a distribution of the error.
Calculating ratio of sample data used for model fitting/training and validation 1:10 test:train ratio is popular because it looks round, 1:9 is popular because of 10-fold CV, 1:2 is popular because it is also round and reassembles bootstrap. Sometimes one gets a test from some da
27,587
Calculating ratio of sample data used for model fitting/training and validation
As an extension on the k-fold answer, the "usual" choice of k is either 5 or 10. The leave-one-out method has a tendency to produce models that are too conservative. FYI, here is a reference on that fact: Shao, J. (1993), Linear Model Selection by Cross-Validation, Journal of the American Statistical Association, Vol. 88, No. 422, pp. 486-494
Calculating ratio of sample data used for model fitting/training and validation
As an extension on the k-fold answer, the "usual" choice of k is either 5 or 10. The leave-one-out method has a tendency to produce models that are too conservative. FYI, here is a reference on that f
Calculating ratio of sample data used for model fitting/training and validation As an extension on the k-fold answer, the "usual" choice of k is either 5 or 10. The leave-one-out method has a tendency to produce models that are too conservative. FYI, here is a reference on that fact: Shao, J. (1993), Linear Model Selection by Cross-Validation, Journal of the American Statistical Association, Vol. 88, No. 422, pp. 486-494
Calculating ratio of sample data used for model fitting/training and validation As an extension on the k-fold answer, the "usual" choice of k is either 5 or 10. The leave-one-out method has a tendency to produce models that are too conservative. FYI, here is a reference on that f
27,588
A predictor that "becomes" categorical when larger than a cutoff
@Tim, as usual, summarizes this well (+1): there is no problem with your "perfectly normal and valid solution." The following illustrates in a bit more detail. On further review, I found your initial model much easier to interpret, with nonsmokers the reference group for the indicator variable and raw, non-negative and non-standardized values for Cig. That's the way that @whuber suggested in response to a very similar question. I used set.seed(20230212) before running your code. Then: coef(mod) # (Intercept) NoSmokeSmoke Cig # 19.920394 6.956709 1.511727 is easy to interpret: (Intercept) is the outcome estimate for non-smokers, NoSmokeSmoke is the extra outcome estimate for smokers if they had Cig=0, and Cig is the extra outcome beyond that per Cig. No need to deal with un-de-meaning, interpreting coefficients in a standard-deviation-of-the-predictor scale, or similar complications. The correlation between the two predictors isn't a problem here. It seems large: cor(df$NoSmoke=="Smoke",df$Cig) # [1] 0.6980322 as another answer notes. Yes, it inflates the variances of the individual coefficient estimates, but not by much: car::vif(mod) # NoSmoke Cig # 1.950264 1.950264 A variance-inflation factor of that size isn't typically considered a problem. When you use the model for predictions there is a counterbalancing negative correlation between the coefficient estimates: print(cov2cor(vcov(mod)),digits=3) # (Intercept) NoSmokeSmoke Cig # (Intercept) 1.00e+00 -0.506 9.41e-16 # NoSmokeSmoke -5.06e-01 1.000 -6.98e-01 # Cig 9.41e-16 -0.698 1.00e+00 that leads to perfectly reasonable (and precise) predictions when the coefficient covariances are properly taken into account. You'll note that the regression restricted to the smokers gives the same result as the combined model: coef(lm(Response~Cig,data=df,subset=NoSmoke=="Smoke")) # (Intercept) Cig # 26.877102 1.511727 with an (Intercept) that's the sum of the original model's (Intercept) plus its NoSmokeSmoke coefficient: sum(coef(mod)[1:2]) # [1] 26.8771 So your solution for this type of data does not have the problems that one might have feared.
A predictor that "becomes" categorical when larger than a cutoff
@Tim, as usual, summarizes this well (+1): there is no problem with your "perfectly normal and valid solution." The following illustrates in a bit more detail. On further review, I found your initial
A predictor that "becomes" categorical when larger than a cutoff @Tim, as usual, summarizes this well (+1): there is no problem with your "perfectly normal and valid solution." The following illustrates in a bit more detail. On further review, I found your initial model much easier to interpret, with nonsmokers the reference group for the indicator variable and raw, non-negative and non-standardized values for Cig. That's the way that @whuber suggested in response to a very similar question. I used set.seed(20230212) before running your code. Then: coef(mod) # (Intercept) NoSmokeSmoke Cig # 19.920394 6.956709 1.511727 is easy to interpret: (Intercept) is the outcome estimate for non-smokers, NoSmokeSmoke is the extra outcome estimate for smokers if they had Cig=0, and Cig is the extra outcome beyond that per Cig. No need to deal with un-de-meaning, interpreting coefficients in a standard-deviation-of-the-predictor scale, or similar complications. The correlation between the two predictors isn't a problem here. It seems large: cor(df$NoSmoke=="Smoke",df$Cig) # [1] 0.6980322 as another answer notes. Yes, it inflates the variances of the individual coefficient estimates, but not by much: car::vif(mod) # NoSmoke Cig # 1.950264 1.950264 A variance-inflation factor of that size isn't typically considered a problem. When you use the model for predictions there is a counterbalancing negative correlation between the coefficient estimates: print(cov2cor(vcov(mod)),digits=3) # (Intercept) NoSmokeSmoke Cig # (Intercept) 1.00e+00 -0.506 9.41e-16 # NoSmokeSmoke -5.06e-01 1.000 -6.98e-01 # Cig 9.41e-16 -0.698 1.00e+00 that leads to perfectly reasonable (and precise) predictions when the coefficient covariances are properly taken into account. You'll note that the regression restricted to the smokers gives the same result as the combined model: coef(lm(Response~Cig,data=df,subset=NoSmoke=="Smoke")) # (Intercept) Cig # 26.877102 1.511727 with an (Intercept) that's the sum of the original model's (Intercept) plus its NoSmokeSmoke coefficient: sum(coef(mod)[1:2]) # [1] 26.8771 So your solution for this type of data does not have the problems that one might have feared.
A predictor that "becomes" categorical when larger than a cutoff @Tim, as usual, summarizes this well (+1): there is no problem with your "perfectly normal and valid solution." The following illustrates in a bit more detail. On further review, I found your initial
27,589
A predictor that "becomes" categorical when larger than a cutoff
Having two features like those described by you is a perfectly normal and valid solution. It also arises in many different scenarios. For example, if you have two features, age and gender, and consider the interaction age * gender, the values for it would be 0 for people whose gender was coded as 0 and equal to age (hence non-negative) otherwise (see also the link mentioned in the comment by EdM). I cannot comment on what the statistician said because you didn't give us a full quote and context. Maybe they meant some specific scenario, but in general, such features and their interactions are commonly used.
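A tiny illustrative sketch (the values are invented, not from the question): the interaction column behaves exactly like the Cig variable, being 0 whenever gender is coded 0 and equal to age otherwise.
age    <- c(25, 40, 31, 52)
gender <- c(0, 1, 0, 1)        # 0/1 coding
age * gender                   # 0 when gender == 0, equal to age otherwise
# [1]  0 40  0 52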
A predictor that "becomes" categorical when larger than a cutoff
Having two features like those described by you is a perfectly normal and valid solution. It also arises in many different scenarios. For example, if you have two features: age and gender, and conside
A predictor that "becomes" categorical when larger than a cutoff Having two features like those described by you is a perfectly normal and valid solution. It also arises in many different scenarios. For example, if you have two features: age and gender, and consider interaction age * gender, the values for it would be 0 for people whose gender was coded as 0 and non-negative otherwise (see also the link mentioned in the comment by EdM). I cannot comment on what the statistician said because you didn't give us a full quote and context. Maybe they meant some specific scenario, but in general, such features and their interactions are commonly used.
A predictor that "becomes" categorical when larger than a cutoff Having two features like those described by you is a perfectly normal and valid solution. It also arises in many different scenarios. For example, if you have two features: age and gender, and conside
27,590
A predictor that "becomes" categorical when larger than a cutoff
Other answers are right, but I think we should take into account that by including or not including the binary variable we are fitting different models with different underlying assumptions. Dropping the binary variable and only using the continuous variable means assuming that the effect of smoking is continuous at 0. That is, the effect of smoking very little (approaching zero) approaches the effect of not smoking. You can see that in your example this assumption is false, because the expected response for a non-smoker is 20 but the expected response for a smoker of 0 cigarettes is 25. Therefore, for this example the model can better match the underlying problem when both predictors are included. From actual data we might not know how the response behaves near zero. For some problems, subject-matter knowledge may give some clue: for example, some contaminants are known not to have any safe dose, so we can suppose that very little exposure will produce a different response than no exposure at all, and using both predictors may be useful. Other phenomena can behave differently. In the absence of prior knowledge, you can test whether the categorical predictor is significant, just as with any other predictor. Working the other way around, finding reasons to discard it as not significant may suggest that the response is continuous at zero smoking, and that may be an interesting result in itself. Additionally, if your actual problem involves more variables, then by discarding the categorical predictor you are assuming that the effects of those other variables are the same for smokers and non-smokers, whereas by including the categorical predictor and its interactions with the other variables you are effectively fitting two different models for smokers and non-smokers. If you have a large enough dataset to fit such a model without overfitting, it can give interesting results.
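A hedged sketch of such a significance test (it relies on the Response, NoSmoke and Cig variables in the question's df, which is not reproduced here): compare the models with and without the binary predictor to test for a jump in the response at zero cigarettes.
fit_full    <- lm(Response ~ NoSmoke + Cig, data = df)  # allows a jump at zero smoking
fit_reduced <- lm(Response ~ Cig, data = df)            # assumes continuity at zero smoking
anova(fit_reduced, fit_full)                            # F-test for the jump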
A predictor that "becomes" categorical when larger than a cutoff
Other answers are right but I think we should take in account that by including or not including the binary variable we are adjusting different models with underlying different assumptions. Dropping t
A predictor that "becomes" categorical when larger than a cutoff Other answers are right but I think we should take in account that by including or not including the binary variable we are adjusting different models with underlying different assumptions. Dropping the binary variable and only using the continuous variable means assuming that the effects of smoking are continuous at 0. That is, the effect of smoking very little (approaching zero) approaches the effect of not smoking. You can see that in your example this assumption is false, because the expected response for a non smoker is 20 but the expected response for a smoker of 0 cigarettes is 25. Therefore for this example the model can match better the underlying problem when both predictors are included. From actual data we might not know how the response behaves near zero. For some problems the knowledge on the problem may give some clue: for example, some contaminants are known not to have any safe dose, so we can suppose that very little exposure is going to produce a different response than no exposure at all and using both predictors may be useful. Other phenomena can behave differently. In case of no prior knowledge, you can test if the categorical predictor is significant, just as with any other predictor. Here we are working in the opposite way, and finding reasons to discard it as not significant may suggest that the response is continuous at zero smoking and that may be an interesting result in itself. Additionally, if your actual problem involves more variables, by discarding the categorical predictor you are assuming that the effects of those other variables are the same for smokers and non smokers, but by including the categorical predictor and its interactions with other variables you are actually adjusting two different models for smokers and non smokers. If you have an enough large dataset to adjust such a model without overfitting, it can give interesting results.
A predictor that "becomes" categorical when larger than a cutoff Other answers are right but I think we should take in account that by including or not including the binary variable we are adjusting different models with underlying different assumptions. Dropping t
27,591
A predictor that "becomes" categorical when larger than a cutoff
I would agree with your statistician. This is a common question, with no perfect answer. Ultimately, it comes down to what your question is. The two most common approaches I see in addiction research are (1) define 'control' as drug-exposed non-users and then define a level of drug use above which a user is defined as a 'case' (non-exposed non-users and exposed low-users are removed), then use these two groups for your analysis; or (2) exclude non-exposed non-users and treat level of use as a continuous variable (drug-exposed non-users are simply '0' on this scale). If you check the correlation between your variables: df$NoSmoke2 = (df$NoSmoke == "NoSmoke")*1 cor(df[,-1]) you'll see that NoSmoke and Cig are very highly correlated (with set.seed(1234), r = -0.73). This makes it quite hard to interpret the regression coefficients, or the significance of either variable. For example, let's say you want to know the effect of smoking on the response variable. What's the effect? Compare your estimated effect of non-smoking and your estimated effect of cigarettes when they are entered together versus separately. The issue of sample restrictions has to do with trying to move beyond pure correlations. If you want to say something about the effect of smoking, then you should only be examining people who had an opportunity to smoke. Otherwise, a reverse effect (the Response leading people to be more likely to smoke) is a very real possibility.
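A hedged sketch of that comparison (again relying on the question's df, with NoSmoke2 created as above): fit the predictors separately and together and compare how the coefficients change.
coef(lm(Response ~ NoSmoke2, data = df))         # smoking status alone
coef(lm(Response ~ Cig, data = df))              # cigarettes alone
coef(lm(Response ~ NoSmoke2 + Cig, data = df))   # both entered together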
A predictor that "becomes" categorical when larger than a cutoff
I would agree with your statistician. There is a common question, with no perfect answer. Ultimately, it comes down to what your question is. The two most common approaches I see in addiction research
A predictor that "becomes" categorical when larger than a cutoff I would agree with your statistician. There is a common question, with no perfect answer. Ultimately, it comes down to what your question is. The two most common approaches I see in addiction research are (1) define 'control' as drug-exposed non-users and then define a level of drug-use above-which a user is defined as a 'case' (non-exposed non-users and exposed low-users are removed). Then use these two groups for your analysis; or (2) exclude non-exposed non-users and treat level of use as a continuous variable (drug-exposed non-users are simply '0' on this scale). If you check the correlation between your variables: df$NoSmoke2 = (df$NoSmoke == "NoSmoke")*1 cor(df[,-1]) You'll see that NoSmoke and Cig are very highly correlated (with set.seed(1234) r = -0.73). This makes it quite hard to interpret the regression coefficients, or the significance of either variable. For example, let's say you want to know the effect of smoking on the response variable. What's the effect? Compare your effect of Non-smoking and your effect of cigarettes when they are entered together or separately. The issue of sample restrictions has to do with trying to move beyond pure correlations. If you want to say something about the effect of smoking, then you should only be examining people who had an opportunity to smoke. Otherwise, reverse effects (Response leads to people being more likely to smoke) is a very real possibility.
A predictor that "becomes" categorical when larger than a cutoff I would agree with your statistician. There is a common question, with no perfect answer. Ultimately, it comes down to what your question is. The two most common approaches I see in addiction research
27,592
A predictor that "becomes" categorical when larger than a cutoff
I would have one variable such as $\tilde x=\max(x,0)$. This type of variable is used in a linear spline in some software packages such as Stata, where it can be created with the mkspline command. Normally, these are used to model varying slopes, e.g. you may have a model where the response to a negative predictor is different from the response to a positive one. In this case you create two variables: $x_+=\max(0,x)$ and $x_-=\min(0,x)$.
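A minimal sketch (the values are invented for illustration) of building these variables by hand in R; Stata's mkspline creates the analogous columns.
x       <- c(-3, -1, 0, 2, 5)
x_plus  <- pmax(x, 0)   # elementwise max(x, 0)
x_minus <- pmin(x, 0)   # elementwise min(x, 0)
cbind(x, x_plus, x_minus)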
A predictor that "becomes" categorical when larger than a cutoff
I would have one variable such as $\tilde x=\max(x,0)$. This type of a variable is used in a linear spline in some packages such as Stata, where it can be created with mkspline function. Normally, the
A predictor that "becomes" categorical when larger than a cutoff I would have one variable such as $\tilde x=\max(x,0)$. This type of variable is used in a linear spline in some software packages such as Stata, where it can be created with the mkspline command. Normally, the
A predictor that "becomes" categorical when larger than a cutoff I would have one variable such as $\tilde x=\max(x,0)$. This type of a variable is used in a linear spline in some packages such as Stata, where it can be created with mkspline function. Normally, the
27,593
Invariance of results when scaling explanatory variables in logistic regression, is there a proof?
Here is a heuristic idea: the likelihood for a logistic regression model is $$ \ell(\beta|y) \propto \prod_i\left(\frac{\exp(x_i'\beta)}{1+\exp(x_i'\beta)}\right)^{y_i}\left(\frac{1}{1+\exp(x_i'\beta)}\right)^{1-y_i} $$ and the MLE is the arg max of that likelihood. The likelihood depends on $\beta$ only through the linear predictor $x_i'\beta$, so when you scale a regressor by a constant, scaling its coefficient by the inverse of that constant leaves $x_i'\beta$, and hence the maximal likelihood, unchanged.
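A minimal numerical check (the simulated data are my own illustration, not from the question): rescaling a regressor by 10 divides its fitted coefficient by 10 and leaves the maximized log-likelihood unchanged.
set.seed(1)
x <- rnorm(200)
y <- rbinom(200, 1, plogis(0.5 + 1.2 * x))
f1 <- glm(y ~ x, family = binomial)
f2 <- glm(y ~ I(10 * x), family = binomial)
coef(f1)["x"] / coef(f2)["I(10 * x)"]   # ratio is 10
logLik(f1); logLik(f2)                  # identical maximized log-likelihoods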
Invariance of results when scaling explanatory variables in logistic regression, is there a proof?
Here is a heuristic idea: The likelihood for a logistic regression model is $$ \ell(\beta|y) \propto \prod_i\left(\frac{\exp(x_i'\beta)}{1+\exp(x_i'\beta)}\right)^{y_i}\left(\frac{1}{1+\exp(x_i'\beta)
Invariance of results when scaling explanatory variables in logistic regression, is there a proof? Here is a heuristic idea: the likelihood for a logistic regression model is $$ \ell(\beta|y) \propto \prod_i\left(\frac{\exp(x_i'\beta)}{1+\exp(x_i'\beta)}\right)^{y_i}\left(\frac{1}{1+\exp(x_i'\beta)}\right)^{1-y_i} $$ and the MLE is the arg max of that likelihood. The likelihood depends on $\beta$ only through the linear predictor $x_i'\beta$, so when you scale a regressor by a constant, scaling its coefficient by the inverse of that constant leaves $x_i'\beta$, and hence the maximal likelihood, unchanged.
Invariance of results when scaling explanatory variables in logistic regression, is there a proof? Here is a heuristic idea: The likelihood for a logistic regression model is $$ \ell(\beta|y) \propto \prod_i\left(\frac{\exp(x_i'\beta)}{1+\exp(x_i'\beta)}\right)^{y_i}\left(\frac{1}{1+\exp(x_i'\beta)
27,594
Invariance of results when scaling explanatory variables in logistic regression, is there a proof?
Christoph has a great answer (+1). Just writing this because I can't comment there. The crucial point here is that the likelihood only depends on the coefficients $\beta$ through the linear term $X \beta$. This makes the likelihood unable to distinguish between "$X \beta$" and "$(XD) (D^{-1}\beta)$", causing the invariance you've noticed. To be specific about this, we need to introduce some notation (which we can do since we're writing an answer!). Let $y_i | x_i \stackrel{ind.}{\sim} \mathrm{bernoulli}\left[ \mathrm{logit}^{-1} (x_i^T \beta) \right]$ be independent draws according to the logistic regression model, where $x_i \in \mathbb{R}^{p+1}$ is the vector of measured covariates. Write the likelihood of the $i^{th}$ observation as $l(y_i, x_i^T \beta)$. To introduce the change of coordinates, write $\bar{x}_i = D x_i$, where $D$ is a diagonal matrix with all diagonal entries nonzero. By definition of maximum likelihood estimation, we know that maximum likelihood estimators $\hat{\beta}$ of the data $\{y_i | x_i\}$ satisfy that $$\sum_{i=1}^n l(y_i, x_i^T \beta) \leq \sum_{i=1}^n l(y_i, x_i^T \hat\beta) \tag{1}$$ for all coefficients $\beta \in \mathbb{R}^{p+1}$, and that maximum likelihood estimators for the data $\{y_i | \bar{x}_i\}$ satisfy that $$\sum_{i=1}^n l(y_i, \bar{x}_i^T \alpha) \leq \sum_{i=1}^n l(y_i, \bar{x}_i^T \hat\alpha) \tag{2}$$ for all coefficients $\alpha \in \mathbb{R}^{p+1}$. In your argument, you used a closed form of the maximum likelihood estimator to derive the result. It turns out, though (as Christoph suggested above), that all you need to do is work with the likelihood. Let $\hat{\beta}$ be a maximum likelihood estimator of the data $\{y_i | x_i\}$. Now, writing $\beta = D \alpha$, we can use equation (1) to show that $$\sum_{i=1}^n l(y_i, \bar{x}_i^T \alpha) = \sum_{i=1}^n l\left(y_i, (x_i^T D) (D^{-1} \beta)\right) \leq \sum_{i=1}^n l(y_i, x_i^T \hat\beta) = \sum_{i=1}^n l(y_i, \bar{x}_i^T D^{-1} \hat{\beta}).$$ That is, $D^{-1} \hat{\beta}$ satisfies equation (2) and is therefore a maximum likelihood estimator with respect to the data $\{y_i | \bar{x}_i\}$. This is the invariance property you noticed. (For what it's worth, there's a lot of room for generalizing this argument beyond logistic regression: did we need independent observations? did we need the matrix $D$ to be diagonal? did we need a binary response? did we need to use the logit link? What notation would you change for this argument to work in different scenarios?)
Invariance of results when scaling explanatory variables in logistic regression, is there a proof?
Christoph has a great answer (+1). Just writing this because I can't comment there. The crucial point here is that the likelihood only depends on the coefficients $\beta$ through the linear term $X \b
Invariance of results when scaling explanatory variables in logistic regression, is there a proof? Christoph has a great answer (+1). Just writing this because I can't comment there. The crucial point here is that the likelihood only depends on the coefficients $\beta$ through the linear term $X \beta$. This makes the likelihood unable to distinguish between "$X \beta$" and $(XD) (D^{-1}\beta)$", causing the invariance you've noticed. To be specific about this, we need to introduce some notation (which we can do since we're writing an answer!). Let $y_i | x_i \stackrel{ind.}{\sim} \mathrm{bernoulli}\left[ \mathrm{logit}^{-1} (x_i^T \beta) \right]$ be independent draws according to the logistic regression model, where $x_i \in \mathbb{R}^{p+1}$ is the measured covariates. Write the likelihood of the $i^{th}$ observation as $l(y_i, x_i^T \beta)$. To introduce the change of coordinates, write $\bar{x}_i = D x_i$, where $D$ is diagonal matrix with all diagonal entries nonzero. By definition of maximum likelihood estimation, we know that maximum likelihood estimators $\hat{\beta}$ of the data $\{y_i | x_i\}$ satisfy that $$\sum_{i=1}^n l(y_i, x_i^T \beta) \leq \sum_{i=1}^n l(y_i, x_i^T \hat\beta) \tag{1}$$ for all coefficients $\beta \in \mathbb{R}^p$, and that maximum likelihood estimators for the data $\{y_i | \bar{x}_i\}$ satisfy that $$\sum_{i=1}^n l(y_i, \bar{x}_i^T \alpha) \leq \sum_{i=1}^n l(y_i, \bar{x}_i^T \hat\alpha) \tag{2}$$ for all coefficients $\alpha \in \mathbb{R}^p$. In your argument, you used a closed form of the maximum likelihood estimator to derive the result. It turns out, though, (as Cristoph suggested above), all you need to do is work with the likelihood. Let $\hat{\beta}$ be a maximum likelihood estimator of the data $\{y_i | x_i\}$. Now, writing $\beta = D \alpha$, we can use equation (1) to show that $$\sum_{i=1}^n l(y_i, \bar{x}_i^T \alpha) = \sum_{i=1}^n l\left(y_i, (x_i^T D) (D^{-1} \beta)\right) \leq \sum_{i=1}^n l(y_i, x_i^T \hat\beta) = \sum_{i=1}^n l(y_i, \bar{x}_i^T D^{-1} \hat{\beta}).$$ That is, $D^{-1} \hat{\beta}$ satisfies equation (2) and is therefore a maximum likelihood estimator with respect to the data $\{y_i | \bar{x}_i\}$. This is the invariance property you noticed. (For what it's worth, there's a lot of room for generalizing this argument beyond logistic regression: did we need independent observations? did we need the matrix $D$ to be diagonal? did we need a binary response? did we need the use logit? What notation would you change for this argument to work in different scenarios?)
Invariance of results when scaling explanatory variables in logistic regression, is there a proof? Christoph has a great answer (+1). Just writing this because I can't comment there. The crucial point here is that the likelihood only depends on the coefficients $\beta$ through the linear term $X \b
27,595
Why use a Gaussian mixture model?
I'll borrow the notation from (1), which describes GMMs quite nicely in my opinion. Suppose we have a feature $X \in \mathbb{R}^d$. To model the distribution of $X$ we can fit a GMM of the form $$f(x)=\sum_{m=1}^{M} \alpha_m \phi(x;\mu_m;\Sigma_m)$$ with $M$ the number of components in the mixture, $\alpha_m$ the mixture weight of the $m$-th component and $\phi(x;\mu_m;\Sigma_m)$ being the Gaussian density function with mean $\mu_m$ and covariance matrix $\Sigma_m$. Using the EM algorithm (its connection to K-Means is explained in this answer) we can acquire estimates of the model parameters, which I'll denote with a hat here $(\hat{\alpha}_m, \hat{\mu}_m, \hat{\Sigma}_m)$. So, our GMM has now been fitted to $X$, let's use it! This addresses your questions 1 and 3: What is the metric to say that one data point is closer to another with GMM? [...] How can this ever be used for clustering things into K cluster? As we now have a probabilistic model of the distribution, we can among other things calculate the posterior probability of a given instance $x_i$ belonging to component $m$, which is sometimes referred to as the 'responsibility' of component $m$ for (producing) $x_i$ (2), denoted as $\hat{r}_{im}$: $$ \hat{r}_{im} = \frac{\hat{\alpha}_m \phi(x_i;\mu_m;\Sigma_m)}{\sum_{k=1}^{M}\hat{\alpha}_k \phi(x_i;\mu_k;\Sigma_k)}$$ This gives us the probabilities of $x_i$ belonging to the different components. That is precisely how a GMM can be used to cluster your data. K-Means can encounter problems when the choice of K is not well suited for the data or the shapes of the subpopulations differ; the scikit-learn documentation contains an interesting illustration of such cases. The choice of the shape of the GMM's covariance matrices affects what shapes the components can take on; here again the scikit-learn documentation provides an illustration. While a poorly chosen number of clusters/components can also affect an EM-fitted GMM, a GMM fitted in a Bayesian fashion can be somewhat resilient against the effects of this, allowing the mixture weights of some components to be (close to) zero. More on this can be found here. References (1) Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. The Elements of Statistical Learning. Vol. 1. No. 10. New York: Springer Series in Statistics, 2001. (2) Bishop, Christopher M. Pattern Recognition and Machine Learning. Springer, 2006.
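A hedged sketch of computing these responsibilities in R (the mclust package is one possible implementation; the package choice and the simulated data are my assumptions, not part of the answer): the matrix fit$z holds exactly the posterior membership probabilities described by the formula above.
library(mclust)
set.seed(1)
x   <- c(rnorm(100, mean = 0, sd = 1), rnorm(100, mean = 5, sd = 2))  # two subpopulations
fit <- Mclust(x, G = 2)          # fit a 2-component GMM via EM
head(fit$z)                      # posterior membership probabilities, one column per component
head(fit$classification)         # hard assignment = component with the largest responsibility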
Why use a Gaussian mixture model?
I'll borrow the notation from (1), which describes GMMs quite nicely in my opinon. Suppose we have a feature $X \in \mathbb{R}^d$. To model the distribution of $X$ we can fit a GMM of the form $$f(x
Why use a Gaussian mixture model? I'll borrow the notation from (1), which describes GMMs quite nicely in my opinon. Suppose we have a feature $X \in \mathbb{R}^d$. To model the distribution of $X$ we can fit a GMM of the form $$f(x)=\sum_{m=1}^{M} \alpha_m \phi(x;\mu_m;\Sigma_m)$$ with $M$ the number of components in the mixture, $\alpha_m$ the mixture weight of the $m$-th component and $\phi(x;\mu_m;\Sigma_m)$ being the Gaussian density function with mean $\mu_m$ and covariance matrix $\Sigma_m$. Using the EM algorithm (its connection to K-Means is explained in this answer) we can aquire estimates of the model parameters, which I'll denote with a hat here ($\hat{\alpha}_m, \hat{\mu}_m,\hat{\Sigma}_m)$. So, our GMM has now been fitted to $X$, let's use it! This addresses your questions 1 and 3 What is the metric to say that one data point is closer to another with GMM? [...] How can this ever be used for clustering things into K cluster? As we now have a probabilistic model of the distribution, we can among other things calculate the posterior probability of a given instance $x_i$ belonging to component $m$, which is sometimes referred to as the 'responsibility' of component $m$ for (producing) $x_i$ (2) , denoted as $\hat{r}_{im}$ $$ \hat{r}_{im} = \frac{\hat{\alpha}_m \phi(x_i;\mu_m;\Sigma_m)}{\sum_{k=1}^{M}\hat{\alpha}_k \phi(x_i;\mu_k;\Sigma_k)}$$ this gives us the probabilities of $x_i$ belonging to the different components. That is precisely how a GMM can be used to cluster your data. K-Means can encounter problems when the choice of K is not well suited for the data or the shapes of the subpopulations differ. The scikit-learn documentation contains an interesting illustration of such cases The choice of the shape of the GMM's covariance matrices affects what shapes the components can take on, here again the scikit-learn documentation provides an illustration While a poorly chosen number of clusters/components can also affect an EM-fitted GMM, a GMM fitted in a bayesian fashion can be somewhat resilient against the effects of this, allowing the mixture weights of some components to be (close to) zero. More on this can be found here. References (1) Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. The elements of statistical learning. Vol. 1. No. 10. New York: Springer series in statistics, 2001. (2) Bishop, Christopher M. Pattern recognition and machine learning. springer, 2006.
Why use a Gaussian mixture model? I'll borrow the notation from (1), which describes GMMs quite nicely in my opinon. Suppose we have a feature $X \in \mathbb{R}^d$. To model the distribution of $X$ we can fit a GMM of the form $$f(x
27,596
Why use a Gaussian mixture model?
How is this algorithm better than other standard clustering algorithm such as $K$-means when it comes to clustering? k-means is well suited for roughly spherical clusters of equal size. It may fail if these conditions are violated (although it may still work if the clusters are very widely separated). GMMs can fit clusters with a greater variety of shapes and sizes. But, neither algorithm is well suited for data with curved/non-convex clusters. GMMs give a probabilistic assignment of points to clusters. This lets us quantify uncertainty. For example, if a point is near the 'border' between two clusters, it's often better to know that it has near equal membership probabilities for these clusters, rather than blindly assigning it to the nearest one. The probabilistic formulation of GMMs lets us incorporate prior knowledge, using Bayesian methods. For example, we might already know something about the shapes or locations of the clusters, or how many points they contain. The probabilistic formulation gives a way to handle missing data (e.g. using the expectation maximization algorithm typically used to fit GMMs). We can still cluster a data point, even if we haven't observed its value along some dimensions. And, we can infer what those missing values might have been. ...The $K$ means algorithm partitions data into $K$ clusters with clear set memberships, whereas the Gaussian mixture model does not produce clear set membership for each data point. What is the metric to say that one data point is closer to another with GMM? GMMs give a probability that each point belongs to each cluster (see below). These probabilities can be converted into 'hard assignments' using a decision rule. For example, the simplest choice is to assign each point to the most likely cluster (i.e. the one with highest membership probability). How can I make use of the final probability distribution that GMM produces? Suppose I obtain my final probability distribution $f(x|w)$ where $w$ are the weights, so what? I have obtained a probability distribution that fits to my data $x$. What can I do with it? Here are just a few possibilities. You can: Perform clustering (including hard assignments, as above). Impute missing values (as above). Detect anomalies (i.e. points with low probability density). Learn something about the structure of the data. Sample from the model to generate new, synthetic data points. To follow up with my previous point, for $K$ means, at the end we obtain a set of $K$ clusters, which we may denote as the set $\{S_1, \ldots, S_K\}$, which are $K$ things. But for GMM, all I obtain is one distribution $f(x|w) = \sum\limits_{i=1}^N w_i \mathcal{N}(x|\mu_i, \Sigma_i)$ which is $1$ thing. How can this ever be used for clustering things into $K$ cluster? The expression you wrote is the distribution for the observed data. However, a GMM can be thought of as a latent variable model. Each data point is associated with a latent variable that indicates which cluster it belongs to. When fitting a GMM, we learn a distribution over these latent variables. This gives a probability that each data point is a member of each cluster.
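A self-contained sketch of the first point (the simulated data and the choice of the mclust package are my own assumptions, not from the answer): with elongated clusters of unequal size, the k-means and GMM partitions can differ noticeably, and the GMM additionally returns soft memberships.
library(mclust)
set.seed(2)
X <- rbind(cbind(rnorm(150, 0, 0.3), rnorm(150, 0, 3)),    # tall, thin cluster
           cbind(rnorm(50, 4, 1.5), rnorm(50, 0, 0.3)))    # short, wide cluster
km  <- kmeans(X, centers = 2)
gmm <- Mclust(X, G = 2)
table(kmeans = km$cluster, gmm = gmm$classification)        # compare the two partitions
head(gmm$z)                                                 # soft membership probabilities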
Why use a Gaussian mixture model?
How is this algorithm better than other standard clustering algorithm such as $K$-means when it comes to clustering? k-means is well suited for roughly spherical clusters of equal size. It may fail
Why use a Gaussian mixture model? How is this algorithm better than other standard clustering algorithm such as $K$-means when it comes to clustering? k-means is well suited for roughly spherical clusters of equal size. It may fail if these conditions are violated (although it may still work if the clusters are very widely separated). GMMs can fit clusters with a greater variety of shapes and sizes. But, neither algorithm is well suited for data with curved/non-convex clusters. GMMs give a probabilistic assignment of points to clusters. This lets us quantify uncertainty. For example, if a point is near the 'border' between two clusters, it's often better to know that it has near equal membership probabilities for these clusters, rather than blindly assigning it to the nearest one. The probabilistic formulation of GMMs lets us incorporate prior knowledge, using Bayesian methods. For example, we might already know something about the shapes or locations of the clusters, or how many points they contain. The probabilistic formulation gives a way to handle missing data (e.g. using the expectation maximization algorithm typically used to fit GMMs). We can still cluster a data point, even if we haven't observed its value along some dimensions. And, we can infer what those missing values might have been. ...The $K$ means algorithm partitions data into $K$ clusters with clear set memberships, whereas the Gaussian mixture model does not produce clear set membership for each data point. What is the metric to say that one data point is closer to another with GMM? GMMs give a probability that each each point belongs to each cluster (see below). These probabilities can be converted into 'hard assignments' using a decision rule. For example, the simplest choice is to assign each point to the most likely cluster (i.e. the one with highest membership probability). How can I make use of the final probability distribution that GMM produces? Suppose I obtain my final probability distribution $f(x|w)$ where $w$ are the weights, so what? I have obtained a probability distribution that fits to my data $x$. What can I do with it? Here are just a few possibilities. You can: Perform clustering (including hard assignments, as above). Impute missing values (as above). Detect anomalies (i.e. points with low probability density). Learn something about the structure of the data. Sample from the model to generate new, synthetic data points. To follow up with my previous point, for $K$ means, at the end we obtain a set of $K$ clusters, which we may denote as the set $\{S_1, \ldots, S_K\}$, which are $K$ things. But for GMM, all I obtain is one distribution $f(x|w) = \sum\limits_{i=1}^N w_i \mathcal{N}(x|\mu_i, \Sigma_i)$ which is $1$ thing. How can this ever be used for clustering things into $K$ cluster? The expression you wrote is the distribution for the observed data. However, a GMM can be thought of as a latent variable model. Each data point is associated with a latent variable that indicates which cluster it belongs to. When fitting a GMM, we learn a distribution over these latent variables. This gives a probability that each data point is a member of each cluster.
Why use a Gaussian mixture model? How is this algorithm better than other standard clustering algorithm such as $K$-means when it comes to clustering? k-means is well suited for roughly spherical clusters of equal size. It may fail
27,597
Gradient descent optimization
Gradient descent updates all parameters at each step. You can see this in the update rule: $$ w^{(t+1)}=w^{(t)} - \eta\nabla f\left(w^{(t)}\right). $$ Since the gradient of the loss function $\nabla f(w)$ is vector-valued with dimension matching that of $w$, all parameters are updated at each iteration. The learning rate $\eta$ is a positive number that re-scales the gradient. Taking too large a step can endlessly bounce you across the loss surface with no improvement in your loss function; too small a step can mean tediously slow progress towards the optimum. Although you could estimate linear regression parameters using gradient descent, it's not a good idea. Likewise, there are better ways to estimate logistic regression coefficients.
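A minimal sketch of that update rule in R (an illustration on simulated least-squares data, not part of the original answer): note that both components of w move at every iteration.
set.seed(1)
X <- cbind(1, rnorm(100))                                 # intercept column plus one predictor
y <- X %*% c(2, -3) + rnorm(100)
grad_f <- function(w) t(X) %*% (X %*% w - y) / nrow(X)    # gradient of 0.5 * mean squared error
w   <- c(0, 0)
eta <- 0.1                                                # learning rate
for (t in 1:500) w <- w - eta * grad_f(w)                 # every component of w is updated each step
w                                                         # close to c(2, -3)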
Gradient descent optimization
Gradient descent updates all parameters at each step. You can see this in the update rule: $$ w^{(t+1)}=w^{(t)} - \eta\nabla f\left(w^{(t)}\right). $$ Since the gradient of the loss function $\nabla f
Gradient descent optimization Gradient descent updates all parameters at each step. You can see this in the update rule: $$ w^{(t+1)}=w^{(t)} - \eta\nabla f\left(w^{(t)}\right). $$ Since the gradient of the loss function $\nabla f(w)$ is vector-valued with dimension matching that of $w$, all parameters are updated at each iteration. The learning rate $\eta$ is a positive number that re-scales the gradient. Taking too large a step can endlessly bounce you across the loss surface with no improvement in your loss function; too small a step can mean tediously slow progress towards the optimum. Although you could estimate linear regression parameters using gradient descent, it's not a good idea. Likewise, there are better ways to estimate logistic regression coefficients.
Gradient descent optimization Gradient descent updates all parameters at each step. You can see this in the update rule: $$ w^{(t+1)}=w^{(t)} - \eta\nabla f\left(w^{(t)}\right). $$ Since the gradient of the loss function $\nabla f
27,598
Gradient descent optimization
When the optimization does occur through partial derivatives, in each turn does it change both w1 and w2 or is it a combination like in few iterations only w1 is changed and when w1 isn't reducing the error more, the derivative starts with w2 - to reach the local minima? In each iteration, the algorithm changes all weights at the same time, based on the gradient vector. In fact, the gradient is a vector whose length equals the number of weights in the model. On the other hand, changing one parameter at a time does exist: it is called the coordinate descent algorithm, which is a type of gradient-free optimization algorithm. In practice, it may not work as well as gradient-based algorithms. Here is an interesting answer on gradient-free algorithms: Is it possible to train a neural network without backpropagation?
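A minimal sketch of coordinate descent for contrast (simulated data, my own illustration rather than anything from the answer): only one weight is updated at a time, cycling through the coordinates.
set.seed(1)
X <- cbind(1, rnorm(100))
y <- X %*% c(2, -3) + rnorm(100)
w <- c(0, 0)
for (sweep in 1:50) {
  for (j in 1:2) {
    r    <- y - X[, -j, drop = FALSE] %*% w[-j]   # partial residual, holding the other weight fixed
    w[j] <- sum(X[, j] * r) / sum(X[, j]^2)       # exact one-dimensional minimizer for coordinate j
  }
}
w   # also close to c(2, -3)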
Gradient descent optimization
When the optimization does occur through partial derivatives, in each turn does it change both w1 and w2 or is it a combination like in few iterations only w1 is changed and when w1 isn't reducing the
Gradient descent optimization When the optimization does occur through partial derivatives, in each turn does it change both w1 and w2 or is it a combination like in few iterations only w1 is changed and when w1 isn't reducing the error more, the derivative starts with w2 - to reach the local minima? In each iteration, the algorithm changes all weights at the same time, based on the gradient vector. In fact, the gradient is a vector whose length equals the number of weights in the model. On the other hand, changing one parameter at a time does exist: it is called the coordinate descent algorithm, which is a type of gradient-free optimization algorithm. In practice, it may not work as well as gradient-based algorithms. Here is an interesting answer on gradient-free algorithms: Is it possible to train a neural network without backpropagation?
Gradient descent optimization When the optimization does occur through partial derivatives, in each turn does it change both w1 and w2 or is it a combination like in few iterations only w1 is changed and when w1 isn't reducing the
27,599
Gradient descent optimization
Gradient descent is applied to both w1 and w2 in each iteration. During each iteration, the parameters are updated according to the gradients. They would likely have different partial derivatives. Check here.
Gradient descent optimization
Gradient decent is applied to both w1 and w2 for each iteration. During each iteration, the parameters updated according to the gradients. They would likely have different partial derivative. Check he
Gradient descent optimization Gradient descent is applied to both w1 and w2 in each iteration. During each iteration, the parameters are updated according to the gradients. They would likely have different partial derivatives. Check here.
Gradient descent optimization Gradient decent is applied to both w1 and w2 for each iteration. During each iteration, the parameters updated according to the gradients. They would likely have different partial derivative. Check he
27,600
Gradient descent optimization
The aim of gradient descent is to minimize the cost function. This minimization is achieved by adjusting the weights, in your case w1 and w2. In general there could be n such weights. Gradient descent is done in the following way: initialize the weights randomly; compute the cost function and the gradient with the initialized weights; update the weights: it might happen that the gradient is 0 for some weights, in which case those weights do not change after the update (for example, if the gradient is [1, 0], then W2 will remain unchanged); check the cost function with the updated weights, and if the decrease is still acceptable, continue iterating, else terminate. While updating the weights, which weight (W1 or W2) gets changed is entirely decided by the gradient. All the weights get updated (though some weights might not change, depending on the gradient).
Gradient descent optimization
The aim of gradient descent is to minimize the cost function. This minimization is achieved by adjusting weights, for your case w1 and w2. In general there could be n such weights. Gradient descen
Gradient descent optimization The aim of gradient descent is to minimize the cost function. This minimization is achieved by adjusting weights, for your case w1 and w2. In general there could be n such weights. Gradient descent is done in the following way: initialize weights randomly. compute the cost function and gradient with initialized weights. update weigths: It might happen that the gradient is O for some weights, in that case those weights doesn't show any change after updating. for example: Let say gradient is [1,0] the W2 will remain unchanged. check the cost function with updated weights, if the decrement is acceptable enough continue the iterations else terminate. while updating weights which weight ( W1 or W2) gets changed is entirely decided by gradient. All the weights get updated ( some weights might not change based on gradient).
Gradient descent optimization The aim of gradient descent is to minimize the cost function. This minimization is achieved by adjusting weights, for your case w1 and w2. In general there could be n such weights. Gradient descen