Derivation for the confidence interval for a population proportion
The binomial distribution $B(n,p)$ is just the sum of $n$ Bernoulli variables with success probability $p$. Therefore the Central Limit Theorem applies, and if $n$ is "large enough" you can approximate the binomial distribution by a normal distribution with the same mean $np$ and the same variance $np(1-p)$. This means $\frac{X}{n}$ can be approximated by a normal distribution with mean $p$ and variance $\frac{p(1-p)}{n}$. The corresponding standard deviation is then $\sqrt{\frac{p(1-p)}{n}}$. The question then becomes when $n$ is "large enough", and there are always lots of discussions about that. One of the shortcomings of the normal approximation is that it is always symmetric, while the binomial distribution is not for $p \neq 0.5$. This in turn has the effect that the confidence interval above may include values larger than 1 or smaller than 0, which obviously does not make sense. More seriously, it does not only include values that make no sense, it also fails to attain the stated coverage of $1-\alpha$. There is also a lot of discussion about when the normal approximation makes sense, which depends not only on $n$ but also on $p$. The Wikipedia article http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval is a very good starting point, which also discusses better alternatives. The normal approximation is still widely in use because it usually does a decent job and because it requires little calculation, which was important in the days before computers.
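As a concrete sketch of the interval this approximation yields (the function name and the hard-coded $z$ value are mine, not from the question), note how the result has to be clipped to $[0,1]$ - exactly the symmetry problem described above:

```python
import math

def wald_interval(successes, n, z=1.96):
    """Normal-approximation (Wald) CI for a binomial proportion.

    z = 1.96 corresponds to a 95% interval. Note the clipping to
    [0, 1]: without it the interval can leave the valid range,
    one of the shortcomings discussed above.
    """
    p_hat = successes / n
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    lo = max(0.0, p_hat - half_width)
    hi = min(1.0, p_hat + half_width)
    return lo, hi

lo, hi = wald_interval(2, 20)   # p_hat = 0.1 with small n: the raw lower
print(round(lo, 3), round(hi, 3))  # endpoint is negative and gets clipped to 0
```

With $\hat p$ near 0 or 1 the clipping kicks in, which is one symptom of the coverage problem; the alternatives on the Wikipedia page (Wilson, Clopper-Pearson) avoid it.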
Derivation for the confidence interval for a population proportion
This is an immediate consequence of the normal approximation to the sampling distribution of the mean (proportion). Note that if $Z$ were a standard normal RV (with mean 0 and sd 1), then we would have: $$ \mbox{P}\left( -z_{\alpha/2} < Z < z_{\alpha/2} \right) \approx 1-\alpha. $$ Substitute, then, the centered and scaled sample proportion for Z, i.e. let $$ Z = \frac{\hat{p} - p}{\sqrt{p(1-p)/n}} $$ and this gives you the confidence interval they presented.
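Spelling out the last step: solving the double inequality $-z_{\alpha/2} < Z < z_{\alpha/2}$ for $p$, and then (the usual extra approximation) replacing the unknown $p$ in the standard error by $\hat{p}$, gives

$$ \hat{p} - z_{\alpha/2}\sqrt{\frac{p(1-p)}{n}} < p < \hat{p} + z_{\alpha/2}\sqrt{\frac{p(1-p)}{n}} \quad\Longrightarrow\quad \hat{p} \pm z_{\alpha/2}\sqrt{\frac{\hat{p}(1-\hat{p})}{n}}, $$

an interval with approximate coverage $1-\alpha$.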
How to interpret two-way interactions in Linear Mixed Effects modeling?
It means that for a rat with Age=0, logit(Score) should be expected to be lower when TrialNumber is higher: 1.872e-06 units lower for each unit of TrialNumber. But this effect changes when we look at higher ages; the effect of TrialNumber is 2.123e-08 units higher for each additional unit of Age. As Age gets bigger, the effect will eventually go positive, but whether that happens at plausible values of Age will depend on what those plausible values are. Similar interpretations hold for the Age effects at specific values of TrialNumber: the fitted model says that at TrialNumber=0, the expected logit(Score) is 2.788e-03 units lower for each additional unit of Age, and this effect differs by 2.123e-08 for each additional unit of TrialNumber.
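In symbols, the fitted TrialNumber slope at a given Age is $\beta_{Trial} + \beta_{Trial\times Age}\cdot Age$; a quick check of where that slope changes sign (variable names are mine, the coefficients are the ones quoted above):

```python
# Coefficients quoted from the fitted model above.
b_trial = -1.872e-06      # TrialNumber main effect (slope at Age = 0)
b_age   = -2.788e-03      # Age main effect (slope at TrialNumber = 0)
b_inter =  2.123e-08      # TrialNumber:Age interaction

def trial_slope(age):
    """Effect of one extra unit of TrialNumber on logit(Score) at a given Age."""
    return b_trial + b_inter * age

# Age at which the TrialNumber effect crosses zero:
crossover = -b_trial / b_inter
print(round(crossover, 1))   # ~88.2: whether the sign flip ever matters
                             # depends on whether Age = 88 is plausible
```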
How to account for lag in a simple regression in R?
I would have a look at the R package dynlm. It provides an L operator that lets you model lag terms in the regression equation. The examples on the dynlm help page should give you tips for working on your problem. Pay attention to configuring the time series structure.
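The underlying idea - regress $y_t$ on an explicitly shifted copy of the predictor - can be sketched outside R as well. Here is a Python illustration on hypothetical simulated data (dynlm's L() operator does this alignment for you inside the model formula):

```python
import numpy as np

# Hypothetical data where y_t depends on x_{t-1} (illustration only).
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 + 0.8 * np.concatenate([[0.0], x[:-1]]) + 0.1 * rng.normal(size=n)

# Align the series by hand: regress y_t on x_{t-1}, dropping t = 0,
# then fit ordinary least squares with an intercept.
Y = y[1:]
X = np.column_stack([np.ones(n - 1), x[:-1]])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(beta)   # roughly [2.0, 0.8]
```

The manual dropping of the first observation is the bookkeeping that a dedicated lag operator spares you.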
How to account for lag in a simple regression in R?
You might find the chapter on dynamic models in Market Response Models: Econometric and Time Series Analysis helpful. It's not R-specific in any way, but it will walk you through the basic model with lags and leads (when customers and/or competitors anticipate a marketing action and adjust their behavior before that action takes place).
What are some good (ideally free) tools to give laymen access to basic statistical techniques?
As I said in my comment, my first concern would be that you need to figure out how to avoid making people more dangerous by handing them tools that are more impressive than Excel but that also require a lot more knowledge/intuition/experience to use and interpret properly. Sort of like replacing company cars with airplanes. That said, I have always had a bit of a soft spot for gretl. It's command-driven (like SAS et al., as opposed to R) rather than a full language, has a reasonable GUI, and is pretty powerful. I can also see you using graphical workflow tools like RapidMiner.
What are some good (ideally free) tools to give laymen access to basic statistical techniques?
Judging from my experience with research-active clinicians, imparting safe levels of statistical knowledge is very, very difficult. In 3-6 months, while doing other things, I would hazard that it's impossible. I think in your shoes I would just make a lot of reports that write themselves (with Brew, Sweave, R2HTML, whatever) and go HEAVY on the visualisations. Anything else and you're just putting a loaded idiot-gun in their hands. I would also wean them off Excel. Just get them using .csv files, in Excel if they wish. Proprietary file formats, merged cells, coloured cells - they're all the devil's work. A nice, flat, colourless .csv - that's all they need.
What are some good (ideally free) tools to give laymen access to basic statistical techniques?
When it comes to data mining, Weka is very user-friendly.
What are probabilistic approaches to finding the right number of clusters?
There are methods to do that. A good starting point is Rasmussen, C. E. (2000). The Infinite Gaussian Mixture Model. In S. A. Solla, T. K. Leen, & K.-R. Müller (Eds.), Advances in Neural Information Processing Systems 12 (Vol. 12, pp. 554-560). MIT Press. The idea is to put a Dirichlet prior on the mixture weights of a mixture of Gaussians and take the limit of infinitely many components. Since you always have finitely many data points, it doesn't matter that you potentially have infinitely many mixture components, but it allows the model to open new clusters if it needs to. There is a lot more work along these lines; a good starting point would be the publications of Yee Whye Teh.
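The mechanism can be sketched with the Chinese restaurant process, the partition distribution this Dirichlet/infinite limit induces: each point joins an existing cluster with probability proportional to its size, or opens a new one with probability proportional to a concentration parameter $\alpha$. A minimal illustration (my own sketch, not Rasmussen's full sampler):

```python
import random

def crp_partition(n, alpha, seed=0):
    """Draw one partition of n items from a Chinese restaurant process.

    Item i joins existing cluster c with prob |c| / (i + alpha) and
    starts a new cluster with prob alpha / (i + alpha), so the number
    of clusters is chosen by the process rather than fixed in advance.
    """
    rng = random.Random(seed)
    sizes = []           # sizes[c] = number of items currently in cluster c
    labels = []
    for i in range(n):
        r = rng.random() * (i + alpha)
        acc = 0.0
        for c, s in enumerate(sizes):
            acc += s
            if r < acc:
                sizes[c] += 1
                labels.append(c)
                break
        else:
            labels.append(len(sizes))   # open a new cluster
            sizes.append(1)
    return labels

labels = crp_partition(100, alpha=1.0)
print(max(labels) + 1)   # number of clusters the process chose for itself
```

In the full model each cluster additionally carries Gaussian parameters and the assignments are resampled given the data; the point here is only that the prior puts no hard cap on the number of clusters.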
What are probabilistic approaches to finding the right number of clusters?
The first question you should then answer is: what is a cluster? Most of the time, a cluster is whatever the clustering algorithm finds - which by definition is then correct. If you run e.g. k-means, it does a good job of finding the optimal $k$-cell Voronoi partitioning of the dataset. So if you are referring to k-means, the question is: what are the chances that the data set is based on $k$ Voronoi cells?
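To make the Voronoi point concrete, here is a plain Lloyd's-algorithm sketch on hypothetical data (my own illustration): the assignment step is literally "each point joins the Voronoi cell of its nearest centre", so whatever k-means returns is a $k$-cell Voronoi partition by construction.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two hypothetical, well-separated blobs; with k = 2 the "clusters"
# k-means finds are just the two Voronoi cells around its centres.
pts = np.vstack([rng.normal(0.0, 0.5, size=(50, 2)),
                 rng.normal(5.0, 0.5, size=(50, 2))])

def kmeans(pts, centers, iters=20):
    """Plain Lloyd iterations: alternate Voronoi assignment and mean update."""
    for _ in range(iters):
        # Voronoi step: assign each point to its nearest centre's cell.
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Update step: each centre moves to the mean of its cell.
        centers = np.array([pts[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

labels, centers = kmeans(pts, centers=pts[[0, 50]].copy())
```

Whether those cells correspond to anything meaningful in the data is exactly the question the answer raises.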
What is distribution of lengths of gaps between occurrences of ones in Bernoulli process?
The time you have to wait until the next one is a geometric variable $X\sim\mathcal{G}(p)$ with probability parameter $p$, i.e. $$ \mathbb{P}(X=k) = (1-p)^k p \quad k=0,1,2,\ldots $$ Fitting your distribution to the data presumably means estimating $p$ by $\hat p$ and using the plug-in distribution $\mathcal{G}(\hat p)$ for all purposes. If you do not want to run a Bayesian analysis (with prior $\pi(p)=1/\sqrt{p(1-p)}$), estimating $p$ by the proportion of $1$'s along the Bernoulli sequence is indeed unbiased.
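A quick simulation makes the plug-in fit concrete (hypothetical data; under $\mathcal{G}(p)$ with pmf $(1-p)^k p$ the mean gap is $(1-p)/p$):

```python
import random

random.seed(0)
p_true = 0.3
seq = [1 if random.random() < p_true else 0 for _ in range(100_000)]

# Plug-in estimate of p: the proportion of ones (unbiased).
p_hat = sum(seq) / len(seq)

# Gap lengths: number of zeros between consecutive ones.
gaps, run, started = [], 0, False
for b in seq:
    if b == 1:
        if started:
            gaps.append(run)
        started, run = True, 0
    elif started:
        run += 1

# Compare the observed mean gap with the fitted (1 - p_hat) / p_hat.
mean_gap = sum(gaps) / len(gaps)
print(round(p_hat, 3), round(mean_gap, 2), round((1 - p_hat) / p_hat, 2))
```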
How do I clean up inconsistent survey data?
Data cleaning of surveys takes longer than analysis and report write-up, so you're not alone. :) Normally in a survey, we path the questions for respondents. So, for example, in computer-assisted telephone interviewing (or online interviews, or face-to-face interviewing with a laptop), the survey programmers code the survey to literally skip questions that should not be presented if the respondent answers a particular way. It appears that a question skip pattern was missing from this survey, for whatever reason. If a skip pattern should have been implemented, then yes, you can introduce it post hoc for questions 2 and 3 and change the "should not have answered" responses to system-missing (or whatever other missing code you're using). There are a lot of survey books out there, and the right ones for you will really depend on your particular needs, as they all have various strengths and weaknesses. Have a look at the range of books by David De Vaus, such as Analysing Social Science Data - this looks particularly good for your situation. David De Vaus has written a number of other social science survey books, and they all come recommended. The Dillman et al. book also came highly recommended to me, although I have not used it myself. I also recommend cognitive testing followed by field testing of a questionnaire before going live with a survey. This type of testing is designed to show up question-sequencing issues, while also showing how respondents interpret questions (sometimes not the same way as intended by the questionnaire designer!). While this is too late for your current survey, you can implement it for future surveys. Best wishes with your survey analysis.
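The post-hoc skip pattern is a one-pass recode. A minimal sketch on a hypothetical extract (question names and the missing code are mine; your own data and codebook will differ):

```python
import csv, io

# Hypothetical flat .csv extract: Q2/Q3 should only be asked when Q1 == "Yes".
raw = """respondent,Q1,Q2,Q3
1,Yes,4,1
2,No,2,5
3,Yes,5,2
4,No,3,4
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# Post-hoc skip pattern: recode "should not have answered" responses
# to a missing code (here the empty string, i.e. system-missing).
MISSING = ""
for r in rows:
    if r["Q1"] != "Yes":
        r["Q2"] = MISSING
        r["Q3"] = MISSING

print([r["Q2"] for r in rows])   # ['4', '', '5', '']
```

The same recode is a couple of lines in SPSS, Stata, or R; the important part is documenting which responses were overwritten and why.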
Understanding proof of McDiarmid's inequality
$\mathbb{E}(V_{i} | X_{1}, \ldots, X_{i-1}) = 0$ Let us introduce some notation: $X_{1:i} = X_{1}, \ldots, X_{i}$. \begin{align*} \mathbb{E}(V_{i} | X_{1:i-1}) &= \mathbb{E}\left( \left[\mathbb{E}(g | X_{1:i}) - \mathbb{E}(g | X_{1:i-1})\right] | X_{1:i-1}\right) \\\\ &=\mathbb{E}\left( \mathbb{E}(g | X_{1:i}) | X_{1:i-1}\right)- \mathbb{E}\left( \mathbb{E}(g | X_{1:i-1}) | X_{1:i-1} \right) \end{align*} Next apply the definition of iterated expectation to $\mathbb{E}\left( \mathbb{E}(g | X_{1:i}) | X_{1:i-1}\right)$. First let us work with the inner expectation: \begin{align*} \mathbb{E}(g | X_{1:i}) &= \int_{x_{i+1:n}} g f_{X_{1:n} | X_{1:i}} \mathrm{d}x_{i+1:n} \end{align*} Since the $X_{i}$ are independent, we can reduce the conditional pdf to \begin{align*} f_{X_{1:n} | X_{1:i}} &= \frac{\prod_{j=1}^{n} f_{X_{j}}(x_{j})} {\prod_{j=1}^{i} f_{X_{j}}(x_{j})} \\\\ &= \prod_{j=i+1}^{n} f_{X_{j}}(x_{j}) \end{align*} Substituting this pdf into the inner expectation integral we get \begin{align*} \mathbb{E}(g | X_{1:i}) &= \int_{x_{i+1:n}} g \prod_{j=i+1}^{n} f_{X_{j}}(x_{j}) \mathrm{d}x_{i+1:n} \end{align*} Observe that $\mathbb{E}(g | X_{1:i})$ is a function of $X_{1:i}$ only. 
Thus while doing the outer expectation, the conditional pdf will be $f_{X_{1:i} | X_{1:i-1}}$. Now the outer expectation: \begin{align*} \mathbb{E}\left( \mathbb{E}(g | X_{1:i}) | X_{1:i-1} \right) &= \int_{x_{i}} \left ( \int_{x_{i+1:n}} g \prod_{j=i+1}^{n} f_{X_{j}}(x_{j}) \mathrm{d}x_{i+1:n} \right) \frac{\prod_{j=1}^{i} f_{X_{j}}(x_{j})}{\prod_{j=1}^{i-1} f_{X_{j}}(x_{j})} \mathrm{d}x_{i}\\\\ & = \int_{x_{i:n}} g \prod_{j=i}^{n} f_{X_{j}}(x_{j}) \mathrm{d}x_{i:n} \\\\ & = \mathbb{E}(g | X_{1:i-1}) \end{align*} Using the same argument one can prove \begin{align*} \mathbb{E}\left( \mathbb{E}(g | X_{1:i-1}) | X_{1:i-1} \right) & = \mathbb{E}(g | X_{1:i-1}) \end{align*} which leads to \begin{align*} \mathbb{E}(V_{i} | X_{1}, \ldots, X_{i-1}) &= \mathbb{E}\left( \mathbb{E}(g | X_{1:i}) | X_{1:i-1}\right)- \mathbb{E}\left( \mathbb{E}(g | X_{1:i-1}) | X_{1:i-1} \right) \\\\ &= \mathbb{E}(g | X_{1:i-1}) - \mathbb{E}(g | X_{1:i-1}) \\\\ &= 0 \end{align*} $\mathbb{E}(e^{t V_{i} } | X_{1}, \ldots, X_{i-1}) \le e^{t^2c_{i}^{2}/8}$ The above inequality will follow from Hoeffding's lemma if we can prove that $(V_{i} | X_{1}, \ldots, X_{i-1})$ is bounded above and below. We have already proved $\mathbb{E}(V_{i} | X_{1}, \ldots, X_{i-1}) = 0$. The following proof is taken from John Duchi's wonderful notes on these inequalities. 
Let \begin{align*} U_{i} &= \sup_{u} \quad \mathbb{E}(g | X_{1:i-1}, u) - \mathbb{E}(g | X_{1:i-1}) \\\\ L_{i} &= \inf_{l} \quad \mathbb{E}(g | X_{1:i-1}, l) - \mathbb{E}(g | X_{1:i-1}) \end{align*} Now \begin{align*} U_{i} - L_{i} &\le \sup_{l,u} \, \mathbb{E}(g | X_{1:i-1}, u) - \mathbb{E}(g | X_{1:i-1}, l) \\\\ &\le \sup_{l,u} \left ( \int_{x_{i+1:n}} [ g(X_{1:i-1}, u, x_{i+1:n}) - g(X_{1:i-1}, l, x_{i+1:n}) ] \prod_{j=i+1}^{n} f_{X_{j}}(x_{j}) \mathrm{d}x_{i+1:n} \right ) \\\\ &\le \int_{x_{i+1:n}} \sup_{l,u} \; \left ( g(X_{1:i-1}, u, x_{i+1:n}) - g(X_{1:i-1}, l, x_{i+1:n}) \right ) \prod_{j=i+1}^{n} f_{X_{j}}(x_{j}) \; \mathrm{d}x_{i+1:n} \\\\ &\le \int_{x_{i+1:n}} c_{i} \prod_{j=i+1}^{n} f_{X_{j}}(x_{j}) \; \mathrm{d}x_{i+1:n} \\\\ &= c_{i} \end{align*} The third line follows from Jensen's inequality since $\sup$ is convex, and the fourth line from the assumptions of McDiarmid's inequality. Hence $L_{i} \le V_{i} \le U_{i}$, and we can apply Hoeffding's lemma to get \begin{align*} \mathbb{E}(e^{t V_{i} } | X_{1}, \ldots, X_{i-1}) &\le e^{t^2c_{i}^{2}/8} \end{align*} $\mathbb{E} \left( e^{t \sum_{i=1}^{n} V_{i}} \right) = \mathbb{E} \left( e^{t \sum_{i=1}^{n-1} V_{i}} \mathbb{E} \left( e^{tV_{n}} | X_{1}, \ldots, X_{n-1} \right) \right)$ A straightforward application of iterated expectation will lead us to the above result. 
\begin{align*} \mathbb{E} \left( e^{t \sum_{i=1}^{n} V_{i}} \right) &= \mathbb{E} \left( \mathbb{E} \left( e^{t \sum_{i=1}^{n} V_{i}} | X_{1}, \ldots, X_{n-1} \right) \right) \\\\ &= \mathbb{E} \left( e^{t \sum_{i=1}^{n-1} V_{i}} \mathbb{E} \left( e^{t V_{n}} | X_{1}, \ldots, X_{n-1} \right) \right) \\\\ \end{align*} The inner expectation is wrt $X_{n}$, while the outer expectation is wrt $X_{1:n-1}$. Applying the Hoeffding's lemma bound $n$ times, we get: \begin{align*} \mathbb{E} \left( e^{t \sum_{i=1}^{n} V_{i}} \right) &= \mathbb{E} \left( e^{t \sum_{i=1}^{n-1} V_{i}} \mathbb{E} \left( e^{t V_{n}} | X_{1}, \ldots, X_{n-1} \right) \right) \\\\ &\le \mathbb{E} \left( e^{t \sum_{i=1}^{n-1} V_{i}} \exp \left( t^{2} c_{n}^{2}/8 \right) \right) \\\\ &\le \exp\left( \frac{1}{8} \sum_{i=1}^{n} t^{2} c_{i}^{2} \right) \end{align*} Now we are ready to prove McDiarmid's inequality: \begin{align*} \mathbb{P} \left( g(X_{1}, \ldots, X_{n}) - \mathbb{E}(g(X_{1}, \ldots, X_{n})) \ge \epsilon \right) &= \mathbb{P} \left( \sum_{i=1}^{n} V_{i} \ge \epsilon \right) \\\\ &= \mathbb{P} \left( e^{t \sum_{i=1}^{n} V_{i}} \ge e^{t \epsilon} \right) \\\\ & \le \exp(-t \epsilon) \mathbb{E} \left( e^{t \sum_{i=1}^{n} V_{i}} \right) \\\\ & \le \exp(-t \epsilon) \exp\left( \frac{1}{8} \sum_{i=1}^{n} t^{2} c_{i}^{2} \right) \\\\ & = \exp \left (-t \epsilon + \frac{1}{8} \sum_{i=1}^{n} t^{2} c_{i}^{2} \right) \end{align*} The third line follows from Markov's inequality. To get the final result, we need to minimize the expression wrt $t$. This occurs at $t = 4 \epsilon / \sum_{i=1}^{n} c_{i}^{2}$.
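Plugging this minimizing value $t = 4 \epsilon / \sum_{i=1}^{n} c_{i}^{2}$ back into the exponent completes the proof:

\begin{align*} -t \epsilon + \frac{1}{8} \sum_{i=1}^{n} t^{2} c_{i}^{2} &= -\frac{4\epsilon^{2}}{\sum_{i=1}^{n} c_{i}^{2}} + \frac{2\epsilon^{2}}{\sum_{i=1}^{n} c_{i}^{2}} = -\frac{2\epsilon^{2}}{\sum_{i=1}^{n} c_{i}^{2}}, \end{align*}

so that $$ \mathbb{P} \left( g(X_{1}, \ldots, X_{n}) - \mathbb{E}(g(X_{1}, \ldots, X_{n})) \ge \epsilon \right) \le \exp \left( -\frac{2\epsilon^{2}}{\sum_{i=1}^{n} c_{i}^{2}} \right), $$ which is McDiarmid's inequality.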
Understanding proof of McDiarmid's inequality
$\mathbb{E}(V_{i} | X_{1}, \ldots, X_{i-1}) = 0$ Let us introduce some notation, $X_{1:i} = X_{1}, \ldots, X_{i}$. \begin{align*} \mathbb{E}(V_{i} | X_{1:i-1}) &= \mathbb{E}\left( \left[\mathbb{E}(g
Understanding proof of McDiarmid's inequality $\mathbb{E}(V_{i} | X_{1}, \ldots, X_{i-1}) = 0$ Let us introduce some notation, $X_{1:i} = X_{1}, \ldots, X_{i}$. \begin{align*} \mathbb{E}(V_{i} | X_{1:i-1}) &= \mathbb{E}\left( \left[\mathbb{E}(g | X_{1:i}) - \mathbb{E}(g | X_{1:i-1})\right] | X_{1:i-1}\right) \\\\ &=\mathbb{E}\left( \mathbb{E}(g | X_{1:i}) | X_{1:i-1}\right)- \mathbb{E}\left( \mathbb{E}(g | X_{1:i-1}) | X_{1:i-1} \right) \end{align*} Next apply the definitions of iterated expectations to $\mathbb{E}\left( \mathbb{E}(g | X_{1:i}) | X_{1:i-1}\right)$ First let us work with inner expectation. \begin{align*} \mathbb{E}(g | X_{1:i}) &= \int_{x_{i+1:n}} g f_{X_{1:n} | X_{1:i}} \mathrm{d}x_{i+1:n} \end{align*} Since $X_{i}$ are independent, we can reduce the joint pdf to \begin{align*} f_{X_{1:n} | X_{1:i}} &= \frac{\prod_{j=1}^{n} f_{X_{j}}(x_{j})} {\prod_{j=1}^{i} f_{X_{j}}(x_{j})} \\\\ &= \prod_{j=i+1}^{n} f_{X_{j}}(x_{j}) \end{align*} Substituting the joint pdf in the inner expectation integral we get, \begin{align*} \mathbb{E}(g | X_{1:i}) &= \int_{x_{i+1:n}} g \prod_{j=i+1}^{n} f_{X_{j}}(x_{j})\text{dx}_{i+1:n} \end{align*} Observe that $\mathbb{E}(g | X_{1:i})$ is some function of $X_{1:i}$ only. 
Thus while doing the outer expectation, the conditional pdf will be $f_{X_{1:i} | X_{1:i-1}}$ Now the outer expectation: \begin{align*} \mathbb{E}\left( \mathbb{E}(g | X_{1:i}) | X_{1:i-1} \right) &= \int_{x_{i}} \left ( \int_{x_{i+1:n}} g \prod_{j=i+1}^{n} f_{X_{j}}(x_{j}) \mathrm{d}x_{i+1:n} \right) \frac{\prod_{j=1}^{i} f_{X_{j}}(x_{j})}{\prod_{j=1}^{i-1} f_{X_{j}}(x_{j})} \mathrm{d}x_{i}\\\\ & = \int_{x_{i:n}} g \prod_{j=i}^{n} f_{X_{j}}(x_{j}) \mathrm{d}x_{i:n} \\\\ & = \mathbb{E}(g | X_{1:i-1}) \end{align*} Using the same argument used above one can prove, \begin{align*} \mathbb{E}\left( \mathbb{E}(g | X_{1:i-1}) | X_{1:i-1} \right) & = \mathbb{E}(g | X_{1:i-1}) \end{align*} which leads to \begin{align*} \mathbb{E}(V_{i} | X_{1}, \ldots, X_{i-1}) &= \mathbb{E}\left( \mathbb{E}(g | X_{1:i}) | X_{1:i-1}\right)- \mathbb{E}\left( \mathbb{E}(g | X_{1:i-1}) | X_{1:i-1} \right) \\\\ &= \mathbb{E}(g | X_{1:i-1}) - \mathbb{E}(g | X_{1:i-1}) \\\\ &= 0 \end{align*} $\mathbb{E}(e^{t V_{i} } | X_{1}, \ldots, X_{i-1}) \le e^{t^2c_{i}^{2}/8}$ The above inequality will follow from Hoeffding's lemma if we can prove $(V_{i} | X_{1}, \ldots, X_{i-1})$ is both upper and lower bounded. We have already proved $\mathbb{E}(V_{i} | X_{1}, \ldots, X_{i-1}) = 0$. The following proof is taken from John Duchi's wonderful notes on the inequalities. 
Let \begin{align*} U_{i} &= \sup_{u} \quad \mathbb{E}(g | X_{1:i-1}, u) - \mathbb{E}(g | X_{1:i-1}) \\\\ L_{i} &= \inf_{l} \quad \mathbb{E}(g | X_{1:i-1}, l) - \mathbb{E}(g | X_{1:i-1}) \end{align*} Now \begin{align*} U_{i} - L_{i} &\le \sup_{l,u} \, \mathbb{E}(g | X_{1:i-1}, u) - \mathbb{E}(g | X_{1:i-1}, l) \\\\ &\le \sup_{l,u} \left ( \int_{x_{i+1:n}} [ g(X_{1:i-1}, u, x_{i+1:n}) - g(X_{1:i-1}, l, x_{i+1:n}) ] \prod_{j=i+1}^{n} f_{X_{j}}(x_{j}) \mathrm{d}x_{i+1:n} \right ) \\\\ &\le \int_{x_{i+1:n}} \sup_{l,u} \; \left ( g(X_{1:i-1}, u, x_{i+1:n}) - g(X_{1:i-1}, l, x_{i+1:n}) \right ) \prod_{j=i+1}^{n} f_{X_{j}}(x_{j}) \; \mathrm{d}x_{i+1:n} \\\\ &\le \int_{x_{i+1:n}} c_{i} \prod_{j=i+1}^{n} f_{X_{j}}(x_{j}) \; \mathrm{d}x_{i+1:n} \\\\ &= c_{i} \end{align*} The third line follows from Jensen's inequality since $\sup$ is convex and the fourth line from the assumptions in the McDiarmid Inequality. Hence $L_{i} \le V_{i} \le U_{i}$ and we can apply Hoeffding's lemma to get \begin{align*} \mathbb{E}(e^{t V_{i} } | X_{1}, \ldots, X_{i-1}) &\le e^{t^2c_{i}^{2}/8} \end{align*} $\mathbb{E} \left( e^{t \sum_{i=1}^{n} V_{i}} \right) = \mathbb{E} \left( e^{t \sum_{i=1}^{n-1} V_{i}} \mathbb{E} \left( e^{tV_{n}} | X_{1}, \ldots, X_{n-1} \right) \right)$ A straightforward application of iterated expectation will lead us to the above result.
\begin{align*} \mathbb{E} \left( e^{t \sum_{i=1}^{n} V_{i}} \right) &= \mathbb{E} \left( \mathbb{E} \left( e^{t \sum_{i=1}^{n} V_{i}} | X_{1}, \ldots, X_{n-1} \right) \right) \\\\ &= \mathbb{E} \left( e^{t \sum_{i=1}^{n-1} V_{i}} \mathbb{E} \left( e^{t V_{n}} | X_{1}, \ldots, X_{n-1} \right) \right) \\\\ \end{align*} The inner expectation is wrt $X_{n}$ while the outer expectation is wrt $X_{1:n-1}$. Applying Hoeffding's lemma $n$ times, we get: \begin{align*} \mathbb{E} \left( e^{t \sum_{i=1}^{n} V_{i}} \right) &= \mathbb{E} \left( e^{t \sum_{i=1}^{n-1} V_{i}} \mathbb{E} \left( e^{t V_{n}} | X_{1}, \ldots, X_{n-1} \right) \right) \\\\ &\le \mathbb{E} \left( e^{t \sum_{i=1}^{n-1} V_{i}} \exp \left( t^{2} c_{n}^{2}/8 \right) \right) \\\\ &\le \exp\left( \frac{1}{8} \sum_{i=1}^{n} t^{2} c_{i}^{2} \right) \end{align*} Now we are ready to prove McDiarmid's inequality, \begin{align*} \mathbb{P} \left( g(X_{1}, \ldots, X_{n}) - \mathbb{E}(g(X_{1}, \ldots, X_{n})) \ge \epsilon \right) &= \mathbb{P} \left( \sum_{i=1}^{n} V_{i} \ge \epsilon \right) \\\\ &= \mathbb{P} \left( e^{t \sum_{i=1}^{n} V_{i}} \ge e^{t \epsilon} \right) \\\\ & \le \exp(-t \epsilon) \mathbb{E} \left( e^{t \sum_{i=1}^{n} V_{i}} \right) \\\\ & \le \exp(-t \epsilon) \exp\left( \frac{1}{8} \sum_{i=1}^{n} t^{2} c_{i}^{2} \right) \\\\ & = \exp \left (-t \epsilon + \frac{1}{8} \sum_{i=1}^{n} t^{2} c_{i}^{2} \right) \end{align*} The third line follows from Markov's inequality. To get the final result, we need to minimize the expression wrt $t$. The minimum occurs at $t = 4 \epsilon / \sum_{i=1}^{n} c_{i}^{2}$, which yields the stated bound $\exp \left( -2 \epsilon^{2} / \sum_{i=1}^{n} c_{i}^{2} \right)$.
Understanding proof of McDiarmid's inequality $\mathbb{E}(V_{i} | X_{1}, \ldots, X_{i-1}) = 0$ Let us introduce some notation, $X_{1:i} = X_{1}, \ldots, X_{i}$. \begin{align*} \mathbb{E}(V_{i} | X_{1:i-1}) &= \mathbb{E}\left( \left[\mathbb{E}(g
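The proof's final bound can be sanity-checked numerically. Take $g$ to be the mean of $n$ independent Uniform(0,1) draws: changing one coordinate moves $g$ by at most $c_i = 1/n$, so $\sum_i c_i^2 = 1/n$ and the bound becomes $\exp(-2 n \epsilon^2)$. The Monte Carlo sketch below (all numbers are illustrative, not from the original post) confirms the empirical tail probability sits under the bound:

```python
import math
import random

random.seed(0)

n = 50         # number of independent Uniform(0,1) inputs
eps = 0.1      # deviation threshold
trials = 20000

# g = sample mean; changing one coordinate moves g by at most c_i = 1/n,
# so sum(c_i^2) = 1/n and McDiarmid gives exp(-2 * eps^2 * n).
bound = math.exp(-2 * n * eps ** 2)

hits = sum(
    (sum(random.random() for _ in range(n)) / n) - 0.5 >= eps
    for _ in range(trials)
)
empirical = hits / trials
print(empirical, "<=", bound)
```

The bound here is loose (the true tail is far smaller), which is expected: McDiarmid holds for *any* bounded-difference function, not just well-behaved means.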
46,914
What are the assumptions (and H0) for Wilcoxon signed-rank test?
Assumption 1 is needed. Assumption 3 is not strong enough. You need X and Y to be on scales that make differences orderable, which can mean that X and Y are interval scaled. Regarding the distributional assumption, this depends on how you state the hypothesis. If you want to make an inference about the mean difference (and perhaps about the median?) then you assume the distribution of the differences is symmetric. If you want to test the hypothesis that the probability that the sum of a randomly chosen pair of differences exceeds zero is 0.5 then no distributional assumption is needed.
What are the assumptions (and H0) for Wilcoxon signed-rank test?
Assumption 1 is needed. Assumption 3 is not strong enough. You need X and Y to be on scales that make differences orderable, which can mean that X and Y are interval scaled. Regarding the distribut
What are the assumptions (and H0) for Wilcoxon signed-rank test? Assumption 1 is needed. Assumption 3 is not strong enough. You need X and Y to be on scales that make differences orderable, which can mean that X and Y are interval scaled. Regarding the distributional assumption, this depends on how you state the hypothesis. If you want to make an inference about the mean difference (and perhaps about the median?) then you assume the distribution of the differences is symmetric. If you want to test the hypothesis that the probability that the sum of a randomly chosen pair of differences exceeds zero is 0.5 then no distributional assumption is needed.
What are the assumptions (and H0) for Wilcoxon signed-rank test? Assumption 1 is needed. Assumption 3 is not strong enough. You need X and Y to be on scales that make differences orderable, which can mean that X and Y are interval scaled. Regarding the distribut
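A minimal sketch of the signed-rank test under the symmetry assumption discussed above, using SciPy on simulated paired differences (the thread contains no data, so these numbers are purely illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated paired differences: symmetric (normal) around a positive shift,
# so the signed-rank test's symmetry assumption holds by construction.
d = rng.normal(loc=0.5, scale=1.0, size=100)

res = stats.wilcoxon(d)   # H0: the differences are symmetric about zero
print(res.statistic, res.pvalue)
```

With a genuine shift of half a standard deviation and 100 pairs, the test rejects comfortably; were the differences asymmetric, the rejection would no longer speak to the mean or median specifically.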
46,915
Whether to report untransformed data when performing ANOVA on transformed data?
This depends on a number of things. The analysis was done within the transformation space so presenting the data back-transformed can distort things (reporting untransformed means is just wrong, but back-transforming summaries such as means and variances computed on the transformed scale might be OK in certain situations). I guess the first thing I'd do is see how it looks when you back-transform. Does back-transforming tell the exact same story as the transformed data? If so, then you're probably fine to present it that way. If not, then you need to present the transformed summary. Even if you do back-transform, you need to be clear in your results section that the analysis applies to the transformation. You say, "we found significant effects in the log of the data", etc. Some transformations are variations of an arbitrary measurement anyway. For example, you might measure reaction time in seconds and have a mean of 0.5. Typically that kind of data is tailed out to the right and sometimes can be normalized by simply taking the inverse, so now your mean is 2 response / second. It's hard to argue that either one more meaningfully represents what happened and they're also both straightforwardly expressive and easy to interpret. Another thing to consider is that sometimes the transformed data actually are more meaningful. Sometimes the data need to be transformed partially because the transformation is the more natural expression of the response variable. There are probably lots of things to consider I haven't even mentioned. If you're having a difficult time deciding for your particular problem then ask the particular question about the exact kind of data you have.
Whether to report untransformed data when performing ANOVA on transformed data?
This depends on a number of things. The analysis was done within the transformation space so presenting the data back-transformed can distort things (untransformed means is just wrong, but converting
Whether to report untransformed data when performing ANOVA on transformed data? This depends on a number of things. The analysis was done within the transformation space so presenting the data back-transformed can distort things (reporting untransformed means is just wrong, but back-transforming summaries such as means and variances computed on the transformed scale might be OK in certain situations). I guess the first thing I'd do is see how it looks when you back-transform. Does back-transforming tell the exact same story as the transformed data? If so, then you're probably fine to present it that way. If not, then you need to present the transformed summary. Even if you do back-transform, you need to be clear in your results section that the analysis applies to the transformation. You say, "we found significant effects in the log of the data", etc. Some transformations are variations of an arbitrary measurement anyway. For example, you might measure reaction time in seconds and have a mean of 0.5. Typically that kind of data is tailed out to the right and sometimes can be normalized by simply taking the inverse, so now your mean is 2 response / second. It's hard to argue that either one more meaningfully represents what happened and they're also both straightforwardly expressive and easy to interpret. Another thing to consider is that sometimes the transformed data actually are more meaningful. Sometimes the data need to be transformed partially because the transformation is the more natural expression of the response variable. There are probably lots of things to consider I haven't even mentioned. If you're having a difficult time deciding for your particular problem then ask the particular question about the exact kind of data you have.
Whether to report untransformed data when performing ANOVA on transformed data? This depends on a number of things. The analysis was done within the transformation space so presenting the data back-transformed can distort things (untransformed means is just wrong, but converting
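The warning about back-transforming can be made concrete: back-transforming the mean of logged data gives the geometric mean, which sits systematically below the arithmetic mean for right-skewed data, so the two summaries can tell different stories. A small sketch on simulated data (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)   # right-skewed "data"

arithmetic_mean = x.mean()
back_transformed = np.exp(np.log(x).mean())   # mean on the log scale, then exp

# The back-transformed value is the geometric mean, which understates the
# arithmetic mean whenever the data vary at all (AM-GM inequality).
print(arithmetic_mean, back_transformed)
```

For lognormal data the gap is exactly the factor $e^{\sigma^2/2}$, so whether this matters depends on how skewed the data are on the original scale.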
46,916
Whether to report untransformed data when performing ANOVA on transformed data?
@John has a really good answer here. I just want to add an orthogonal point. Having normally distributed data isn't as important as many people believe. The Gauss-Markov theorem tells us that it's not necessary for model estimation. Normality is required for $p$-values to be accurate with low $N$ (i.e., $p$-values will be correct, even with non-normal data, if $N$ is sufficiently high). If $N$ is low, then you would want to bootstrap your standard errors / $p$-values. Transformations are often performed because the data are most meaningful / interpretable in that scale or to correct for heterogeneity of variance (a more important problem than non-normality). For instance, John used reaction times as an example. It is well-known that the standard deviation of reaction times increases as the mean increases. Taking the log stabilizes the variance.
Whether to report untransformed data when performing ANOVA on transformed data?
@John has a really good answer here. I just want to add an orthogonal point. Having normally distributed data isn't as important as many people believe. The Gauss-Markov theorem tells us that it's no
Whether to report untransformed data when performing ANOVA on transformed data? @John has a really good answer here. I just want to add an orthogonal point. Having normally distributed data isn't as important as many people believe. The Gauss-Markov theorem tells us that it's not necessary for model estimation. Normality is required for $p$-values to be accurate with low $N$ (i.e., $p$-values will be correct, even with non-normal data, if $N$ is sufficiently high). If $N$ is low, then you would want to bootstrap your standard errors / $p$-values. Transformations are often performed because the data are most meaningful / interpretable in that scale or to correct for heterogeneity of variance (a more important problem than non-normality). For instance, John used reaction times as an example. It is well-known that the standard deviation of reaction times increases as the mean increases. Taking the log stabilizes the variance.
Whether to report untransformed data when performing ANOVA on transformed data? @John has a really good answer here. I just want to add an orthogonal point. Having normally distributed data isn't as important as many people believe. The Gauss-Markov theorem tells us that it's no
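The bootstrap suggestion can be sketched as follows: a minimal nonparametric bootstrap of a mean's standard error and percentile interval, on a small simulated skewed sample (numbers are illustrative, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=30)   # small, clearly non-normal sample

# Nonparametric bootstrap of the sample mean: resample rows with replacement.
boot = np.array([
    rng.choice(x, size=x.size, replace=True).mean()
    for _ in range(5000)
])

se_boot = boot.std(ddof=1)                 # bootstrap standard error
ci = np.percentile(boot, [2.5, 97.5])      # percentile confidence interval
print(se_boot, ci)
```

With low $N$ and skewed data, the percentile interval need not be symmetric around the sample mean, which is exactly why it is preferred here over the normal-theory interval.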
46,917
Whether to report untransformed data when performing ANOVA on transformed data?
It will depend on your application, but in the biological sciences it's advised to present the un-transformed means as they are usually more interpretable than the transformed means
Whether to report untransformed data when performing ANOVA on transformed data?
It will depend on your application, but in the biological sciences it's advised to present the un-transformed means as they are usually more interpretable than the transformed means
Whether to report untransformed data when performing ANOVA on transformed data? It will depend on your application, but in the biological sciences it's advised to present the un-transformed means as they are usually more interpretable than the transformed means
Whether to report untransformed data when performing ANOVA on transformed data? It will depend on your application, but in the biological sciences it's advised to present the un-transformed means as they are usually more interpretable than the transformed means
46,918
How can I (or should I) test that observation A tends to be greater than observation B for each subject?
You can do a paired t-test. In R, t.test(surfOM,meanOM,paired=TRUE) That will give you a p-value and a confidence interval for the mean of the differences. These only really make sense if the lakes are a sample of a larger population of lakes, not if you have data on all the lakes in your population (e.g. all the lakes in some geographical area, or all the lakes of a certain type of interest).
How can I (or should I) test that observation A tends to be greater than observation B for each subj
You can do a paired t-test. In R, t.test(surfOM,meanOM,paired=TRUE) That will give you a p-value and a confidence interval for the mean of the differences. These only really makes sense if the lakes
How can I (or should I) test that observation A tends to be greater than observation B for each subject? You can do a paired t-test. In R, t.test(surfOM,meanOM,paired=TRUE) That will give you a p-value and a confidence interval for the mean of the differences. These only really make sense if the lakes are a sample of a larger population of lakes, not if you have data on all the lakes in your population (e.g. all the lakes in some geographical area, or all the lakes of a certain type of interest).
How can I (or should I) test that observation A tends to be greater than observation B for each subj You can do a paired t-test. In R, t.test(surfOM,meanOM,paired=TRUE) That will give you a p-value and a confidence interval for the mean of the differences. These only really makes sense if the lakes
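The R call t.test(surfOM, meanOM, paired=TRUE) has a direct SciPy analogue. The lake values below are hypothetical stand-ins, since the thread's actual surfOM/meanOM vectors are not reproduced here:

```python
from scipy import stats

# Hypothetical surface vs. depth-averaged organic-matter values for 9 lakes;
# these are invented numbers, not the original data.
surf = [5.1, 4.8, 6.0, 5.5, 4.9, 6.2, 5.8, 5.0, 4.7]
mean_om = [4.3, 4.9, 5.1, 4.8, 4.2, 5.5, 5.0, 4.4, 4.1]

# Equivalent of R's t.test(surfOM, meanOM, paired=TRUE):
res = stats.ttest_rel(surf, mean_om)
print(res.statistic, res.pvalue)
```

Internally this is just a one-sample t-test on the pairwise differences, which is why the pairing matters: it removes between-lake variation from the error term.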
46,919
How can I (or should I) test that observation A tends to be greater than observation B for each subject?
When I hear the statement "A tends to be greater than B" it sounds like A is greater than B a bunch of the time, say, more than 50% of the time. This can happen when $\mu_{A}$ is greater than $\mu_{B}$, but it can also happen that $\mu_{A}$ is greater than $\mu_{B}$... yet $P(A > B) < 0.50$. (For a concrete example of what I'm getting at, let B be identically 1/2, and let A put probability $1 - p$ on zero with its remaining probability $p$ on some integer $n \geq 1$. We can make $P(A > B)$ as small as we like by letting $p \to 0$, yet we can make $\mu_{A}$ as big as we like by letting $n \to \infty$.) On top of all this, it looks like your "B" is itself a "mean", which complicates the language. So, my first thought is to try to figure out whether you'd be more interested in $\mu_{A} > \mu_{B}$ or if you'd rather be more interested in $P(A > B) > 0.50$. If you're interested in the means then the t-test should be just fine. If you're more interested in $P(A > B)$, you might consider the Sign Test. It's a nonparametric test, it's light on assumptions, and you don't need to be so concerned about outliers. If you could additionally say that the distribution of the differences of organic matter was symmetric, you could do one better and go for the Wilcoxon Signed Rank test. Anyway, suppose you can say nothing more about the distributions, and suppose they are appropriate for the Sign Test. The data given above had 8 out of 9 pairs with A greater than B. For these data, in R, you would do binom.test(8, 9, alt = "greater") The output would give a p-value and a 95% one-sided Clopper-Pearson confidence interval (which generally is conservative).
How can I (or should I) test that observation A tends to be greater than observation B for each subj
When I hear the statement "A tends to be greater than B" it sounds like A is greater than B a bunch of the time, say, more than 50% of the time. This can happen when $\mu_{A}$ is greater than $\mu_{B
How can I (or should I) test that observation A tends to be greater than observation B for each subject? When I hear the statement "A tends to be greater than B" it sounds like A is greater than B a bunch of the time, say, more than 50% of the time. This can happen when $\mu_{A}$ is greater than $\mu_{B}$, but it can also happen that $\mu_{A}$ is greater than $\mu_{B}$... yet $P(A > B) < 0.50$. (For a concrete example of what I'm getting at, let B be identically 1/2, and let A put probability $1 - p$ on zero with its remaining probability $p$ on some integer $n \geq 1$. We can make $P(A > B)$ as small as we like by letting $p \to 0$, yet we can make $\mu_{A}$ as big as we like by letting $n \to \infty$.) On top of all this, it looks like your "B" is itself a "mean", which complicates the language. So, my first thought is to try to figure out whether you'd be more interested in $\mu_{A} > \mu_{B}$ or if you'd rather be more interested in $P(A > B) > 0.50$. If you're interested in the means then the t-test should be just fine. If you're more interested in $P(A > B)$, you might consider the Sign Test. It's a nonparametric test, it's light on assumptions, and you don't need to be so concerned about outliers. If you could additionally say that the distribution of the differences of organic matter was symmetric, you could do one better and go for the Wilcoxon Signed Rank test. Anyway, suppose you can say nothing more about the distributions, and suppose they are appropriate for the Sign Test. The data given above had 8 out of 9 pairs with A greater than B. For these data, in R, you would do binom.test(8, 9, alt = "greater") The output would give a p-value and a 95% one-sided Clopper-Pearson confidence interval (which generally is conservative).
How can I (or should I) test that observation A tends to be greater than observation B for each subj When I hear the statement "A tends to be greater than B" it sounds like A is greater than B a bunch of the time, say, more than 50% of the time. This can happen when $\mu_{A}$ is greater than $\mu_{B
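The R call binom.test(8, 9, alt = "greater") translates directly to SciPy. With 8 of 9 pairs positive, the one-sided sign-test p-value is $P(X \ge 8) = (9 + 1)/2^9 = 10/512$ under Binomial(9, 0.5):

```python
from scipy import stats

# Sign test: 8 of 9 lake pairs had A > B; under H0 each pair is a fair coin.
res = stats.binomtest(8, n=9, p=0.5, alternative="greater")
print(res.pvalue)   # P(X >= 8 | Binomial(9, 0.5)) = 10/512 = 0.01953125
```

Note the sign test throws away the magnitudes entirely, which is exactly why it is robust to outliers, and why the signed-rank test (which keeps rank information) has more power when its symmetry assumption holds.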
46,920
Are confidence intervals always symmetrical around the point estimate? [duplicate]
Short Answer: No Long Answer: It depends. A confidence interval obtained from an analytical technique (a formula) will be symmetrical around the point estimate on a particular scale. For example, Hazard Ratios, Risk Ratios and Odds Ratios are symmetrical around the point estimate on the natural log scale. They aren't necessarily on other scales, which is often why you see them graphed on the log scale. Confidence intervals obtained by other techniques, like bootstrapping, simulation, or credible intervals in Bayesian analysis, are often symmetrical, but need not be.
Are confidence intervals always symmetrical around the point estimate? [duplicate]
Short Answer: No Long Answer: It depends. A confidence interval obtained from an analytical technique (a formula) will be symmetrical around the point estimate on a particular scale. For example, Haza
Are confidence intervals always symmetrical around the point estimate? [duplicate] Short Answer: No Long Answer: It depends. A confidence interval obtained from an analytical technique (a formula) will be symmetrical around the point estimate on a particular scale. For example, Hazard Ratios, Risk Ratios and Odds Ratios are symmetrical around the point estimate on the natural log scale. They aren't necessarily on other scales, which is often why you see them graphed on the log scale. Confidence intervals obtained by other techniques, like bootstrapping, simulation, or credible intervals in Bayesian analysis, are often symmetrical, but need not be.
Are confidence intervals always symmetrical around the point estimate? [duplicate] Short Answer: No Long Answer: It depends. A confidence interval obtained from an analytical technique (a formula) will be symmetrical around the point estimate on a particular scale. For example, Haza
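To see the log-scale symmetry concretely, here is a Wald-type odds-ratio interval on a toy 2x2 table (the counts are invented): the interval is symmetric around log(OR) but asymmetric around OR after exponentiating.

```python
import math

# Toy 2x2 table (exposed/unexposed vs. event/no event); counts are invented.
a, b, c, d = 20, 80, 10, 90

log_or = math.log((a * d) / (b * c))          # log odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)         # Wald standard error, log scale

or_ = math.exp(log_or)
lo = math.exp(log_or - 1.96 * se)
hi = math.exp(log_or + 1.96 * se)

# Symmetric around the point estimate on the log scale ...
print(log_or - math.log(lo), math.log(hi) - log_or)   # equal: 1.96 * se each
# ... but not on the natural scale:
print(or_ - lo, hi - or_)
```

The upper arm is stretched and the lower arm compressed by the exponential, which is the visual argument for plotting ratio measures on a log axis.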
46,921
Which algorithm to compute p-value of logrank test with three or more groups is best?
As far as I know, the simpler formula is known to be a conservative approximation of the more complicated version. In the classical Cox and Oakes "Analysis of Survival Data" book, chapter 7.7 describes the derivation of the log-rank test as a score test in the two-sample case, and shows how the simpler formula corresponds to using a different (larger) estimator of the information matrix. I assume that this argument would generalize to more than two samples. If you want to see the derivation of the longer formula, it is quite straightforward, and is written out, for example, in the Klein and Moeschberger "Survival Analysis" textbook. In summary, there is no doubt that the more complicated formula is the "correct" one, but is approximated by the easier-to-understand and compute-by-hand simpler formula.
Which algorithm to compute p-value of logrank test with three or more groups is best?
As far as I know, the simpler formula is known to be a conservative approximation of the more complicated version. In the classical Cox and Oakes "Analysis of Survival Data" book, chapter 7.7 describe
Which algorithm to compute p-value of logrank test with three or more groups is best? As far as I know, the simpler formula is known to be a conservative approximation of the more complicated version. In the classical Cox and Oakes "Analysis of Survival Data" book, chapter 7.7 describes the derivation of the log-rank test as a score test in the two-sample case, and shows how the simpler formula corresponds to using a different (larger) estimator of the information matrix. I assume that this argument would generalize to more than two samples. If you want to see the derivation of the longer formula, it is quite straightforward, and is written out, for example, in the Klein and Moeschberger "Survival Analysis" textbook. In summary, there is no doubt that the more complicated formula is the "correct" one, but is approximated by the easier-to-understand and compute-by-hand simpler formula.
Which algorithm to compute p-value of logrank test with three or more groups is best? As far as I know, the simpler formula is known to be a conservative approximation of the more complicated version. In the classical Cox and Oakes "Analysis of Survival Data" book, chapter 7.7 describe
46,922
Which algorithm to compute p-value of logrank test with three or more groups is best?
This amazingly complete handout about survival analysis (from Michael Vaeth, at the University of Aarhus) states on page 40: "Some computer packages and text books use the name log rank test for a slightly different test statistic, namely: AltChi2 = (O1 - E1)^2/E1 + (O2 - E2)^2/E2. The alternative version of the log rank test is (slightly) conservative, since AltChi2 < Chi2 is always satisfied. The p value from the alternative test is therefore too large. If the data contain no tied event times the discrepancy is minimal, but the difference increases with the number of ties among the uncensored observations." A lecture handout is not quite a book chapter or journal article, but he agrees that the simpler method is purely a calculation shortcut that gives a less accurate (more conservative, i.e. larger) P value.
Which algorithm to compute p-value of logrank test with three or more groups is best?
This amazingly complete handout about survival analysis (from Michael Vaeth, at the University of Aarhus) states on page 40: "Some computer packages and text books use the name log rank test for a
Which algorithm to compute p-value of logrank test with three or more groups is best? This amazingly complete handout about survival analysis (from Michael Vaeth, at the University of Aarhus) states on page 40: "Some computer packages and text books use the name log rank test for a slightly different test statistic, namely: AltChi2 = (O1 - E1)^2/E1 + (O2 - E2)^2/E2. The alternative version of the log rank test is (slightly) conservative, since AltChi2 < Chi2 is always satisfied. The p value from the alternative test is therefore too large. If the data contain no tied event times the discrepancy is minimal, but the difference increases with the number of ties among the uncensored observations." A lecture handout is not quite a book chapter or journal article, but he agrees that the simpler method is purely a calculation shortcut that gives a less accurate (more conservative, i.e. larger) P value.
Which algorithm to compute p-value of logrank test with three or more groups is best? This amazingly complete handout about survival analysis (from Michael Vaeth, at the University of Aarhus) states on page 40: "Some computer packages and text books use the name log rank test for a
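The claim that AltChi2 < Chi2 can be checked by hand. The sketch below computes the full (hypergeometric-variance) two-sample log-rank statistic and the sum((O-E)^2/E) shortcut on a toy uncensored dataset with tied event times; the data are invented for illustration:

```python
# Two-sample log-rank: full statistic vs. the sum((O-E)^2/E) shortcut.
g1 = [1, 2, 3, 4, 5]      # event times, group 1 (no censoring)
g2 = [1, 1, 2, 3, 5]      # event times, group 2 (ties with group 1)

times = sorted(set(g1 + g2))
O1 = E1 = V = 0.0
for t in times:
    n1 = sum(x >= t for x in g1)      # at risk in group 1 just before t
    n2 = sum(x >= t for x in g2)
    n = n1 + n2
    d1 = g1.count(t)                  # events at t in group 1
    d = d1 + g2.count(t)              # total events at t
    O1 += d1
    E1 += d * n1 / n
    if n > 1:                         # hypergeometric variance term
        V += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)

D = len(g1) + len(g2)
O2, E2 = D - O1, D - E1
full_chi2 = (O1 - E1) ** 2 / V
alt_chi2 = (O1 - E1) ** 2 / E1 + (O2 - E2) ** 2 / E2
print(alt_chi2, "<=", full_chi2)
```

On this dataset the shortcut gives roughly 0.105 against roughly 0.165 for the full statistic, i.e. a larger (more conservative) p-value, and the gap is driven by the tied event times, exactly as the handout states.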
46,923
Recursive partitioning using median (instead of mean)
You could also pre-process your data, using a transformation like the spatial sign transformation, or the rank-order transformation to minimize the impact of outliers.
Recursive partitioning using median (instead of mean)
You could also pre-process your data, using a transformation like the spatial sign transformation, or the rank-order transformation to minimize the impact of outliers.
Recursive partitioning using median (instead of mean) You could also pre-process your data, using a transformation like the spatial sign transformation, or the rank-order transformation to minimize the impact of outliers.
Recursive partitioning using median (instead of mean) You could also pre-process your data, using a transformation like the spatial sign transformation, or the rank-order transformation to minimize the impact of outliers.
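Both suggested pre-processing steps are one-liners. The sketch below applies the rank-order transform to a univariate sample with an outlier, and a simplified spatial sign transform (here without the usual centering step) to multivariate rows; all data are toy values:

```python
import numpy as np
from scipy import stats

# Rank-order transform: an extreme outlier becomes just "the largest rank".
x = np.array([1.0, 2.0, 3.0, 4.0, 1000.0])
ranks = stats.rankdata(x)
print(ranks)   # → [1. 2. 3. 4. 5.]

# Spatial sign transform (simplified; usually applied after centering):
# project each row onto the unit sphere, so outlying rows keep their
# direction but lose their leverage.
X = np.array([[1.0, 2.0], [3.0, 4.0], [100.0, -200.0]])
X_sign = X / np.linalg.norm(X, axis=1, keepdims=True)
print(np.linalg.norm(X_sign, axis=1))   # all rows have length 1
```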
46,924
Recursive partitioning using median (instead of mean)
Although I have never used it, the quantregForest package seems to do what you want. Here is the description: Quantile Regression Forests is a tree-based ensemble method for estimation of conditional quantiles. It is particularly well suited for high-dimensional data. Predictor variables of mixed classes can be handled. The package is dependent on the package randomForests, written by Andy Liaw. There is also an article accompanying the quantregForest package: Meinshausen N (2006). “Quantile Regression Forests.” Journal of Machine Learning Research, 7, 983–999.
Recursive partitioning using median (instead of mean)
Although I have never used it, the quantregForest package seems to do what you want. Here is the description: Quantile Regression Forests is a tree-based ensemble method for estimation of condition
Recursive partitioning using median (instead of mean) Although I have never used it, the quantregForest package seems to do what you want. Here is the description: Quantile Regression Forests is a tree-based ensemble method for estimation of conditional quantiles. It is particularly well suited for high-dimensional data. Predictor variables of mixed classes can be handled. The package is dependent on the package randomForests, written by Andy Liaw. There is also an article accompanying the quantregForest package: Meinshausen N (2006). “Quantile Regression Forests.” Journal of Machine Learning Research, 7, 983–999.
Recursive partitioning using median (instead of mean) Although I have never used it, the quantregForest package seems to do what you want. Here is the description: Quantile Regression Forests is a tree-based ensemble method for estimation of condition
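quantregForest itself is an R package (and works by recording the full conditional distribution in ordinary random-forest leaves), but the question's core idea — partitioning around medians instead of means — can be sketched in a few lines: choose the split that minimizes summed absolute deviation from the leaf medians, so each leaf predicts a conditional median. This is a single-split miniature on invented data, not the quantregForest algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 1.0, size=400)
y = np.where(x < 0.5, rng.normal(0.0, 1.0, 400), rng.normal(5.0, 1.0, 400))

def sad(v):
    """Summed absolute deviation from the leaf median (L1 impurity)."""
    return np.abs(v - np.median(v)).sum() if v.size else 0.0

# Single split chosen to minimize total L1 impurity; each leaf then
# predicts its median, i.e. a conditional 0.5-quantile.
candidates = np.linspace(0.05, 0.95, 19)
best = min(candidates, key=lambda s: sad(y[x < s]) + sad(y[x >= s]))

left_med = np.median(y[x < best])
right_med = np.median(y[x >= best])
print(best, left_med, right_med)
```

Swapping the median/L1 criterion for another quantile's pinball loss gives the corresponding conditional quantile, which is the generalization the cited Meinshausen (2006) paper develops properly.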
46,925
Recursive partitioning using median (instead of mean)
In addition to Johannes's suggestion about quantregForest, there is also an R package called gbm (generalized boosted machine) which uses trees to calculate conditional quantiles.
Recursive partitioning using median (instead of mean)
In addition to Johannes's suggestion about quantregForest, there is also an R package called gbm (generalized boosted machine) which uses trees to calculate conditional quantiles.
Recursive partitioning using median (instead of mean) In addition to Johannes's suggestion about quantregForest, there is also an R package called gbm (generalized boosted machine) which uses trees to calculate conditional quantiles.
Recursive partitioning using median (instead of mean) In addition to Johannes's suggestion about quantregForest, there is also an R package called gbm (generalized boosted machine) which uses trees to calculate conditional quantiles.
46,926
How to determine the sample distribution based on a survey involving six variables?
There is no single answer for your question, but you can approximate the six distributions to a varying degree of accuracy. The first thing you should do is plot them using either a histogram (hist() in R) or a kernel density estimate (density()). It should give you an idea as to what parametric family (exponential, normal, log-normal...) might provide you with a reasonable fit. If there is one, you can proceed with estimating the parameters.
How to determine the sample distribution based on a survey involving six variables?
There is no single answer for your question, but you can approximate the six distributions to a varying degree of accuracy. First thing you should do is plot them using either histogram (hist() in R)
How to determine the sample distribution based on a survey involving six variables? There is no single answer for your question, but you can approximate the six distributions to a varying degree of accuracy. The first thing you should do is plot them using either a histogram (hist() in R) or a kernel density estimate (density()). It should give you an idea as to what parametric family (exponential, normal, log-normal...) might provide you with a reasonable fit. If there is one, you can proceed with estimating the parameters.
How to determine the sample distribution based on a survey involving six variables? There is no single answer for your question, but you can approximate the six distributions to a varying degree of accuracy. First thing you should do is plot them using either histogram (hist() in R)
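The hist()/density() advice maps directly onto NumPy/SciPy. A sketch on simulated data (a gamma draw standing in for one of the six survey variables), including the follow-up parameter estimation step:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.gamma(shape=2.0, scale=3.0, size=2000)   # stand-in survey variable

counts, edges = np.histogram(data, bins=30)   # hist()-style summary
kde = stats.gaussian_kde(data)                # density()-style estimate

# If the plotted shape suggests a gamma-ish family, estimate its parameters
# by maximum likelihood (location pinned at zero for a pure gamma fit):
shape_hat, loc_hat, scale_hat = stats.gamma.fit(data, floc=0)
print(shape_hat, scale_hat)
```

With 2000 draws the fitted shape and scale land close to the true (2, 3); with a small real survey, several families would fit about equally well, which is the caveat raised in the other answer.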
46,927
How to determine the sample distribution based on a survey involving six variables?
I personally think this is a poor idea. If you know that your data comes from a certain distribution, you can probably say something meaningful. You may have 0/1 responses, so the distribution is binomial, may be conditional on some other covariates -- that's a logistic regression. You may have counts, so the distribution is Poisson, may be conditional on some other covariates -- that's Poisson or zero inflated Poisson or negative binomial regression. However, generally just peeking at the data and trying to determine the distribution rarely leads to good results. Telling us what your ultimate goal of analysis is may help suggest some better routes. Do you want to simulate new data from a similar distribution? Do you want to provide an analytical summary that's easy to compute for certain distributions? (I've seen people fit a lognormal curve to income data, so as to report the Gini coefficient.) Do you want to compare your results with somebody else's? Also, keep in mind that a small sample (say under 100) will be compatible with many possible distributions. A distribution with positive values only could be represented by a gamma, or a lognormal, or a beta, or by the Pearson family, and there is simply no way of distinguishing between them on the basis of the data only. On the other hand, large samples (say more than 10000) won't be compatible with anything, since real life is richer than the assumptions we make about it.
46,928
Beta binomial Bayesian updating over many iterations
1) You could scale it down, so $\alpha,\beta\mapsto \alpha/N, \beta/N$. This would indeed allow you to continue. What this would do, however, is make older data carry less weight (if $N$ is two, it would carry half as much weight). This might even be a feature, if you would rather trust newer data. Compare for example $\alpha=\beta=20$ and $\alpha=\beta=10$ here. What you are doing when dividing by $N$ is multiplying the variance of the distribution by $N$ (almost!) while leaving the expected value unaffected. 2) You could stop right there. With 1 million data points, your distribution will essentially be a point. If you are having trouble with your model despite 1,000,000 data points, you don't need more data, you need a better model. In short, overflow shouldn't be a problem with a beta-binomial setup, because long before you reach overflow, you will have insanely narrow credible intervals.
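The claim about dividing by $N$ can be checked directly with the Beta mean and variance formulas; a minimal sketch with illustrative numbers:

```python
# Dividing alpha and beta by N leaves the Beta mean unchanged and
# multiplies the variance by roughly N (exactly N in the large-parameter limit).
def beta_mean(a, b):
    return a / (a + b)

def beta_var(a, b):
    return a * b / ((a + b) ** 2 * (a + b + 1))

a, b, N = 4000.0, 6000.0, 10.0
print(beta_mean(a, b), beta_mean(a / N, b / N))  # 0.4 0.4 -- identical means
print(beta_var(a / N, b / N) / beta_var(a, b))   # about 9.99 -- close to N
```

The variance ratio is $(a+b+1)\,/\,((a+b)/N+1)$, which approaches $N$ as the parameters grow.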
46,929
Beta binomial Bayesian updating over many iterations
If you continue to update your prior in the manner you described, aren't you assuming that the process generating your data is stationary? If the answer is yes, then all you should need to do is take a random sample of your data to create a likelihood function and then generate the posterior. That way you would not have to worry about overflow. On the other hand, although I do not know what process you are investigating, it seems almost impossible that a process could remain stationary over any long period of time. In fact, you could check whether your data-generating process is changing serially by monitoring independent estimates of the alpha and beta parameters over time. Minimally, you could make a control chart of the two parameters; or, better yet, there is probably a simple way to implement a likelihood ratio test to check for stationarity.
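The monitoring idea can be sketched in a few lines; the batch sizes, the shift point, and the 3-sigma band below are illustrative choices of mine, not part of the answer:

```python
# Sketch of a control-chart check for stationarity: estimate p in successive
# batches and flag batches whose estimate falls outside a 3-sigma band
# around the pooled estimate. The success probability here deliberately
# shifts from 0.3 to 0.6 halfway through, so the check should fire.
import math, random

random.seed(1)
batches = [[1 if random.random() < (0.3 if i < 5 else 0.6) else 0
            for _ in range(500)] for i in range(10)]

pooled = sum(map(sum, batches)) / sum(map(len, batches))
flags = []
for batch in batches:
    p_hat = sum(batch) / len(batch)
    se = math.sqrt(pooled * (1 - pooled) / len(batch))
    flags.append(abs(p_hat - pooled) > 3 * se)
print(flags)  # nonstationary data: most (here, all) batches are flagged
```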
46,930
Beta binomial Bayesian updating over many iterations
If alpha and beta are very large, your prior distribution must already have converged to a single point, and you can use the MAP approximation instead of the full posterior distribution. Having said that, scaling alpha and beta down would preserve the mean and keep the parameters from overflowing (if that's what you're looking for). See Python code:

from conjugate_prior import BetaBinomial

heads = 95
tails = 105

prior_model = BetaBinomial()  # Uninformative prior
updated_model = prior_model.update(heads, tails)

credible_interval = updated_model.posterior(0.45, 0.55)
print("There's {p:.2f}% chance that the coin is fair".format(p=credible_interval*100))

predictive = updated_model.predict(50, 50)
print("The chance of flipping 50 Heads and 50 Tails in 100 trials is {p:.2f}%".format(p=predictive*100))

# To scale down while preserving the mean, build a new model from the
# down-scaled parameters (the exact constructor arguments depend on your
# version of the conjugate_prior package), e.g.:
# scaled_down_model = BetaBinomial(alpha / 10, beta / 10)
46,931
Beta binomial Bayesian updating over many iterations
The approach I found useful for this is to divide the a and b parameters by the maximum value of the y axis at each iteration, thus keeping the scale constant.
46,932
How can I use credibility intervals in Bayesian logistic regression?
I wouldn't use the means at all for the classifier. You don't need to apply "corrections" or to "smooth out" a Bayesian solution; it is the optimal one for the prior information and data that you have actually used. But the means can be useful for giving you a feel for which combinations of regressor variables are likely to lead to classifying towards a particular category. However, this can be a horrendously complicated beast for multinomial regression, as you have a matrix of betas to interpret (one column for each category, except for the reference, which can be thought of as having all betas "estimated" as zero with zero standard error). Given that this seems to be an attempt at an intuitive way to understand what your classifier is doing, let me propose another. I will delete this section if this is not what you were intending.

You have your MCMC samples of the beta matrix: call this $\beta_{ij}^{(b)}$, where $i=1,\dots,R$ denotes the multinomial category, $j=1,\dots,p$ denotes the regressor variable (the $X$), and $b=1,\dots,B$ denotes the $b$th MCMC sampled value. If the categories have different $X$ variables, then simply set those excluded variables' betas to zero in the matrix: $\beta_{ik}^{(b)}=0$ for all $b$ if variable $k$ was not part of the model fit to the $i$th category, and $\beta_{Rj}^{(b)}=0$ for all $j$ and $b$. The first thing you need is a set of covariates to use, $X_{mj}\;\;\;\;m=1,\dots,M$, where $m$ is the "observation number" and $M$ is the number of predictions you are going to make. The data used to fit your model should do for this purpose, so $M=\text{sample size}$. You now calculate the linear predictor for each category for each prediction for each MCMC sample: $$y_{im}^{(b)}=\sum_{j=1}^{p}X_{mj}\beta_{ij}^{(b)}$$ (this may be quicker to code up as a matrix/array operation). Note that $y_{Rm}^{(b)}=0$ for all $b$ and $m$.

Then convert this into a probability of the $m$th observation belonging to the $i$th category/class; call this quantity $Classify(m,i)$: $$Classify(m,i)=\frac{1}{B}\sum_{b=1}^{B}\frac{\exp\left(y_{im}^{(b)}\right)}{\sum_{l=1}^{R}\exp\left(y_{lm}^{(b)}\right)}$$ Now you plot the value of $Classify(m,i)$ against $X_{mj}$, so you will have a total of $R\times p$ plots. Looking at these should give you a feel for what the classifier is doing in relation to the regressor variables. Note that when it comes to actually classifying a new observation, you only need $Classify(m,i)$ in order to do this -- all other quantities from the MCMC are irrelevant for the purpose of classification. What you do need, though, is a loss matrix which describes the loss incurred from classifying into category $i_{est}$ when the true category is actually $i_{true}$; this will be an $R\times R$ matrix, usually zero on the diagonal and positive everywhere else. This can be very important if correctly identifying "rare" classes is crucial compared to correctly identifying "common" classes.
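The quantities above can be sketched as an array operation with numpy; the dimensions below are made up for illustration, and the softmax is computed with the usual max-subtraction for numerical stability:

```python
# Compute Classify(m, i) from B MCMC draws of an (R x p) beta matrix,
# with the reference category's row fixed at zero.
import numpy as np

rng = np.random.default_rng(0)
B, R, p, M = 200, 3, 4, 5
beta = rng.normal(size=(B, R, p))          # stand-in for MCMC draws
beta[:, -1, :] = 0.0                       # reference category: all betas zero
X = rng.normal(size=(M, p))                # covariates for M predictions

y = np.einsum("mj,bij->bmi", X, beta)      # linear predictors y_{im}^{(b)}
probs = np.exp(y - y.max(axis=2, keepdims=True))
probs /= probs.sum(axis=2, keepdims=True)  # softmax per draw and observation
classify = probs.mean(axis=0)              # average over draws -> (M, R)
print(classify.sum(axis=1))                # each row sums to 1
```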
46,933
How can I use credibility intervals in Bayesian logistic regression?
Don't use the mean of the sampled coefficients for making predictions; instead, compute the predictions for logistic regression models with all of the sampled coefficient vectors and take the mean of those predictions (or, better still, treat the predictions for all sampled coefficient vectors as the posterior distribution of the probability of class membership -- the spread of that distribution is a useful indicator of how confident the classifier is about the probabilistic classification). The distribution of the sampled coefficient vectors gives an impression of how well the training data constrain the value of each parameter, so if the distribution is broad, we can't be confident of the "true" value of that coefficient (as explained by Manoel Galdino). However, the key advantage of having a distribution of plausible coefficient vectors is that it provides you with a rational way to get a distribution of plausible values for the probability of class membership, which is what we really want. Often, using a Bayesian approach, we are not really interested in the coefficients of the model, but in the function implemented by the model.
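A minimal numpy sketch of the difference, with made-up coefficient samples: averaging the predictions is not the same as predicting from the averaged coefficients, because the logistic function is nonlinear.

```python
# Posterior-averaged prediction vs. prediction at the posterior mean.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(42)
beta_samples = rng.normal(loc=[1.0, -2.0], scale=0.8, size=(2000, 2))
x = np.array([1.0, 1.5])                   # one observation (with intercept)

p_avg = sigmoid(beta_samples @ x).mean()   # mean of the predictions
p_at_mean = sigmoid(beta_samples.mean(axis=0) @ x)  # prediction at mean betas
print(p_avg, p_at_mean)                    # the two differ noticeably
```

Here the posterior-averaged probability is pulled toward 0.5 relative to the plug-in value, which is exactly the uncertainty the answer says you should keep.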
46,934
How can I use credibility intervals in Bayesian logistic regression?
I don't know if I am understanding your question correctly, but I guess you may use the posterior density to assess the uncertainty around point estimates such as the mean. You may plot a histogram and calculate standard deviations. This is easy to do if you have the MCMC output: just take the sampled values (after a burn-in period), compute the means and standard deviations, and plot histograms or densities. Another advantage of a posterior distribution is that you can assess your uncertainty in the tails of the distribution, and also whether the distribution is symmetric around the mode/mean. Confidence intervals in general assume that the distribution is symmetric and that outliers are rare...
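For instance, a sketch of these summaries from generic draws (stand-ins for real MCMC output):

```python
# Summarise MCMC draws after burn-in: mean, standard deviation, and a
# 95% credible interval taken straight from the sampled values.
import numpy as np

rng = np.random.default_rng(7)
draws = rng.normal(loc=0.8, scale=0.3, size=12000)  # stand-in for MCMC output
post = draws[2000:]                                  # drop burn-in

mean, sd = post.mean(), post.std()
lo, hi = np.percentile(post, [2.5, 97.5])
print(f"mean={mean:.2f} sd={sd:.2f} 95% CrI=({lo:.2f}, {hi:.2f})")
```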
46,935
How can I use credibility intervals in Bayesian logistic regression?
What you're looking for, and what the other respondents have proposed, is called the posterior predictive distribution. It takes into account the inherent uncertainty of the parameter estimates. You can either use the samples from the MCMC run, or you can approximate it from the mean and covariance of the posterior distribution of the parameters by use of the probit function. See pages 218-220 of Chris Bishop's book "Pattern Recognition and Machine Learning" for an overview of how this can be done.
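For reference, a sketch of the moment-matching approximation Bishop derives there, $\sigma(\mu_a / \sqrt{1 + \pi s_a^2 / 8})$, given the posterior mean and variance of the linear activation; the function name below is mine, not Bishop's:

```python
# Probit-based approximation to the logistic posterior predictive probability.
import math

def predictive_prob(mu_a, s2_a):
    # kappa shrinks the activation toward zero as its variance grows
    kappa = 1.0 / math.sqrt(1.0 + math.pi * s2_a / 8.0)
    return 1.0 / (1.0 + math.exp(-kappa * mu_a))

print(predictive_prob(2.0, 0.0))  # no uncertainty: plain sigmoid(2)
print(predictive_prob(2.0, 4.0))  # uncertainty pulls the probability toward 0.5
```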
46,936
Generating sorted pseudo-random numbers in Stata
The help for set_seed states:

The sequences these functions produce are determined by the seed, which is just a number and which is set to 123456789 every time Stata is launched.

Stata's philosophy emphasizes reproducibility, so this consistency is not surprising. Of course you can set the seed yourself. See the help page for more information. One way to sort a column separately from all others is to preserve your data, keep only the column to sort, sort it, save the results in a temporary file, restore your data, and merge the temporary file:

gen y = rnormal()
preserve
keep y
sort y
tempfile out
save `out'
restore
merge 1:1 _n using `out', nogen
46,937
How to export data in R syntax?
You can use dput() to get a structure() that can be used later.

> # Build the original data frame
> x <- seq(1, 10, 1)
> y <- seq(10, 100, 10)
> df <- data.frame(x=x, y=y)
> df
    x   y
1   1  10
2   2  20
3   3  30
4   4  40
5   5  50
6   6  60
7   7  70
8   8  80
9   9  90
10 10 100
> # Use the dput() statement to print out the structure of df
> dput(df)
structure(list(x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), y = c(10,
20, 30, 40, 50, 60, 70, 80, 90, 100)), .Names = c("x", "y"),
row.names = c(NA, -10L), class = "data.frame")

The above structure() statement is the output of dput(df). If you copy/paste that into your R text file, you can use it later. Here's how.

> # Build a new data frame from the structure() statement
> newdf <- structure(list(x = c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10),
+     y = c(10, 20, 30, 40, 50, 60, 70, 80, 90, 100)),
+     .Names = c("x", "y"), row.names = c(NA, -10L),
+     class = "data.frame")
> newdf
    x   y
1   1  10
2   2  20
3   3  30
4   4  40
5   5  50
6   6  60
7   7  70
8   8  80
9   9  90
10 10 100
46,938
Advice on missing value imputation
First of all, it is not clear from your explanation whether or not you have done multiple imputation. If not, please do so: single imputation can be worse than simple complete-case analysis, and both can lead to severely biased results. Next, if I understand correctly, your problem is that you don't know which variables to use as covariates for your imputation model. If your number of possible covariates (I assume these are the other covariates in your model of interest) is limited, you could opt for the nonparametric kind of imputation that is offered by MICE (in R) and similar algorithms. Another option is to use shrinkage (LASSO or the like) in a model predicting customer_no_dependent: this should give you a set of likely predictors. Be aware, though, that this step induces even more uncertainty (you reuse the data yet again), and you should trust your confidence intervals and p-values somewhat less. The effect should be negligible if your association is truly as strong as you indicate. If you do use the kind of parametric, common-sense-induced imputation mechanism (like regressing on 'credible' predictors), simply make note of this fact and mention that the obtained results are conditional on this extra set of assumptions.
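To make the "multiple" part concrete, here is a sketch of Rubin's pooling rules in plain Python; the imputation step itself is a deliberately crude stand-in (a draw from the observed-data distribution), not a recommended imputation model:

```python
# Multiple imputation with Rubin's rules: impute m times with noise,
# estimate the quantity of interest in each completed data set, then
# pool the point estimates and combine within/between variances.
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(loc=10.0, scale=2.0, size=300)
miss = rng.random(300) < 0.25              # ~25% of values go missing
m = 20

est, var = [], []
for _ in range(m):
    y_imp = y.copy()
    # crude stand-in imputation model: draw from the observed distribution
    y_imp[miss] = rng.normal(y[~miss].mean(), y[~miss].std(), size=miss.sum())
    est.append(y_imp.mean())
    var.append(y_imp.var(ddof=1) / len(y_imp))

q_bar = np.mean(est)                       # pooled point estimate
b = np.var(est, ddof=1)                    # between-imputation variance
t = np.mean(var) + (1 + 1 / m) * b         # Rubin's total variance
print(round(q_bar, 2), t > np.mean(var))   # total variance exceeds within-only
```

The point of the pooling is the last line: the total variance correctly exceeds the naive within-imputation variance, which is what single imputation throws away.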
46,939
Advice on missing value imputation
I don't know if you have SAS experience, but I've used SAS PROC MI and PROC MIANALYZE to perform (and then synthesize) multiple imputations in several different models. Building the "imputation model" (this yields non-biased estimates of missing data, incorporating the uncertainty one finds in non-missing data) is probably the most difficult task. The imputation model will include all or most analysis variables (i.e., predictors in your analysis model), as well as auxiliary variables -- other variables that correlate with the dependent variable, with the state of being missing, or with both. (Note: you might want to use p < .15 as a first threshold.) One then selects parameters such as the number of iterations (both before the first imputation and between imputations), the estimation method, the sampling method, etc. Of course, preceding all of this, one should determine what led to the missing data, and whether missing data are MCAR (missing completely at random), MAR (missing at random), or MNAR (missing not at random). Explaining these is beyond the scope of this forum, but -- if you're not familiar with these terms -- there are a number of good introductory-level descriptions on the web. The above can be quite time-consuming depending on the number of candidate variables for your imputation model; however, this has the "silver-lining" advantage of clarifying what's driving the imputation. There are also a number of good diagnostic tools that allow you to evaluate and compare different imputation models. Mplus enables one to do all of this more quickly; basically, it models the state of being missing using ML estimation. You can read more about this at statmodel.com. I agree that single imputation or dropping all missing cases is probably not the best approach, depending, of course, on your research questions. If SAS is an available language and you'd like to talk about this in more detail, please post.
46,940
Generating over-dispersed counts data with serial correlation
A standard way of generating overdispersed count data is to generate data from a Poisson distribution with a random mean: $Y_i\sim Poisson(\lambda_i)$, $\lambda_i \sim F$. For example, if $\lambda_i$ has a Gamma distribution, you will get the negative binomial distribution for $Y$. You can easily impose serial correlation by imposing correlation on the $\lambda_i$'s. For example, you could have $\log\lambda_i \sim AR(1)$. Implemented in R:

N <- 100
rho <- 0.6
log.lambda <- 1 + arima.sim(model=list(ar=rho), n=N)
y <- rpois(N, lambda=exp(log.lambda))

> cor(head(y,-1), tail(y,-1))
[1] 0.4132512
> mean(y)
[1] 4.35
> var(y)
[1] 33.4015

Here the $\log\lambda_i$'s come from a normal distribution, so the marginal distribution is not a classic distribution, but you could get more creative. Also note that the correlation of the $y$'s does not equal rho, but is some function of it.
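The same recipe is easy to check numerically. Here is a Python/NumPy sketch of the AR(1)-on-log-mean construction (Python rather than R is my choice here, purely for illustration; the parameter values are assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
N, rho, sigma = 100_000, 0.6, 1.0

# log(lambda_t) follows an AR(1) process: x_t = rho * x_{t-1} + eps_t
eps = rng.normal(scale=sigma, size=N)
log_lam = np.empty(N)
log_lam[0] = eps[0] / np.sqrt(1 - rho**2)  # start from the stationary distribution
for t in range(1, N):
    log_lam[t] = rho * log_lam[t - 1] + eps[t]
log_lam += 1.0  # shift the mean, as in the R code above

# Poisson counts with the random, serially correlated mean
y = rng.poisson(np.exp(log_lam))

# Overdispersion: the variance exceeds the mean
assert y.var() > y.mean()
# Serial correlation in the counts (a function of rho, not rho itself)
assert np.corrcoef(y[:-1], y[1:])[0, 1] > 0.2
```

The two assertions at the end mirror the point of the answer: mixing the Poisson mean over a correlated process buys both overdispersion and serial dependence at once.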
46,941
Generating over-dispersed counts data with serial correlation
This is one way to do it:

v = rnorm(1, 30, 10)
for (i in 2:30) v = c(v, 0.5*v[i-1] + 0.5*rnorm(1, 30, 10))
round(v)
46,942
How to do meta-regression in SPSS?
Don't use the built-in routines of SPSS to conduct a meta-regression (wrong standard errors; does not give you correct model indices; no heterogeneity statistics). Have a look at David Wilson's SPSS "macros for performing meta-analytic analyses". One of these macros is called MetaReg, which can perform fixed-effect or mixed-effects meta-regression. I would always use Stata or R. By the way, user Wolfgang is the author of an R package called metafor. This is an excellent piece of software to conduct meta-regression. As a general (non-technical) intro to meta-regression, I can recommend Thompson/Higgins (2002) "How should meta-regression analyses be undertaken and interpreted?". Now to your questions: Q1: What is the minimum number of studies necessary for a meta-regression? Some people suggest at least 10 studies are required. Why not 20 or 5 studies? The answer can be found in Borenstein et al (2009: 188): "As is true in primary studies, where we need an appropriately large ratio of subjects to covariates in order for the analysis to be meaningful, in meta-analysis we need an appropriately large ratio of studies to covariates. Therefore, the use of metaregression, especially with multiple covariates, is not a recommended option when the number of studies is small. In primary studies some have recommended a ratio of at least ten subjects for each covariate, which would correspond to ten studies for each covariate in meta-regression. In fact, though, there are no hard and fast rules in either case." Q2: Is the total sample size an important consideration? What is total sample size? The number of studies? Yes, it is important. Or the number of individuals? No, it is not (or less) important. Q3: Why would 10 studies with 200 patients be enough, but 5 studies with 400 patients not be enough? It is just a(n ordinary) regression. You wouldn't run a regression with 5 data points, would you?
In your comment, you state that you have 20 studies, which is enough to run a meta-regression. Q4: Can I enter all three regressors at once and report the global model, or do I have to enter one regressor at a time and report 3 models each one separately? It is just a regression. I would start with three simple bivariate models, then build more complex models (be aware of multicollinearity, see below). Q5: How does the correlation between the independent variables affect this choice? A high correlation between your predictor variables will have a (negative) impact on your results. You should avoid that. Please consult a textbook on the problem of multicollinearity. Q6: How does the number of the studies affect the number of independent variables that I should enter simultaneously? See the Borenstein et al citation. Q7: Does the independent variable have to be a scale variable? [...] The independent variable must be also scale, or could be ordinal or nominal? What is a "scale variable"? Do you mean a continuous/metric variable? Your predictor variables can have any level of measurement. However, if you have a categorical (nominal) predictor variable, you will have to deal with dummy variables (see Multiple Regression with Categorical Variables). Q8: How can I weight my effect size for sample size? As far as I know, all meta-regression approaches expect the weights to be the inverse study variance, i.e. $\frac{1}{v_i}=\frac{1}{SE_i^2}$. Again, you will need standard errors :-) Q9: What is the preferable level of significance? Is p<0.05 still acceptable for clinical research in such an analysis? I cannot answer your question. That really depends on your research question. In my (non-clinical) research I am happy with p < 0.10.
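The inverse-variance weighting behind Q8 can be made concrete: fixed-effect meta-regression is just weighted least squares with weights $w_i = 1/SE_i^2$. Here is a minimal Python sketch; the toy effect sizes, standard errors, and moderator values are invented purely for illustration:

```python
import numpy as np

# Hypothetical toy data: effect sizes, their standard errors, one moderator
yi = np.array([0.30, 0.10, 0.45, 0.25, 0.60, 0.15])   # study effect sizes
sei = np.array([0.10, 0.15, 0.12, 0.20, 0.08, 0.18])  # standard errors
xi = np.array([1.0, 2.0, 0.5, 1.5, 0.2, 2.5])         # moderator

# Fixed-effect meta-regression is weighted least squares with w_i = 1/SE_i^2
w = 1.0 / sei**2
X = np.column_stack([np.ones_like(xi), xi])

# Solve the weighted normal equations (X' W X) b = X' W y
XtW = X.T * w
beta = np.linalg.solve(XtW @ X, XtW @ yi)

# Correct standard errors come from (X' W X)^{-1}, not from ordinary OLS output
cov = np.linalg.inv(XtW @ X)
se_beta = np.sqrt(np.diag(cov))
```

This is also why the built-in SPSS regression output is misleading for meta-regression: the point estimates can be reproduced by WLS, but the reported standard errors are not the $(X'WX)^{-1}$ ones used above.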
46,943
How to do meta-regression in SPSS?
These are some wonderful responses to your initial questions, and the reference guide is particularly helpful. If you're looking for a relatively simple package to do meta-regression, may I recommend Borenstein's software package Comprehensive Meta Analysis. It is limited to meta-regression of a single predictor, but this can be scale/continuous or it can be categorical. The output and graphics produced are easy to follow, and unless you're using multiple predictors it will suffice. Given that you're talking about a small number of studies (e.g. 10), you probably wouldn't be able to detect multiple predictors anyway.
46,944
How to use Confidence Intervals to find the true mean within a percentage
I am not sure what kind of variable is being audited, so I give 2 alternatives: To be able to compute the required sample size to give an acceptable estimate of a continuous variable (= given confidence interval) you have to know a few parameters: mean, standard deviation (and, to be precise, population size). If you do not know these, you have to be able to give an accurate estimate of them (based on e.g. research in the past). $$n=\left(\frac{Z_{c}\sigma}{E}\right)^2,$$ where $n$ is the sample size, $Z_{c}$ is chosen from the standard normal distribution table based on $\alpha$, $\sigma$ is the standard deviation, and $E$ is the margin of error. I could imagine that the variable being examined is a discrete one, and the confidence interval shows what percentage of the population would choose one category, based on the sample (proportion). In that case the required sample size can be computed easily with: $$n=p(1-p)\left(\frac{Z_{c}}{E}\right)^2$$ where $n$ is the sample size, $p$ is the proportion in the population, $Z_{c}$ is chosen from the standard normal distribution table based on $\alpha$, and $E$ is the margin of error. Note: you can also find a lot of online calculators. This article is also worth reading.
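Both formulas are one-liners to evaluate. Here is a hedged Python sketch (the function names are mine; SciPy supplies the normal quantile in place of a table lookup):

```python
from math import ceil
from scipy.stats import norm

def n_for_mean(sigma, E, alpha=0.05):
    """Sample size to estimate a mean within margin E: n = (Z * sigma / E)^2."""
    z = norm.ppf(1 - alpha / 2)
    return ceil((z * sigma / E) ** 2)

def n_for_proportion(p, E, alpha=0.05):
    """Sample size to estimate a proportion within margin E: n = p(1-p)(Z/E)^2."""
    z = norm.ppf(1 - alpha / 2)
    return ceil(p * (1 - p) * (z / E) ** 2)

# e.g. 95% confidence, margin of 3 percentage points, worst case p = 0.5
n_prop = n_for_proportion(0.5, 0.03)
n_cont = n_for_mean(sigma=15, E=2)
```

Using the worst case $p = 0.5$ maximizes $p(1-p)$, so `n_for_proportion(0.5, E)` is a safe upper bound when no prior estimate of the proportion is available.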
46,945
How to use Confidence Intervals to find the true mean within a percentage
It does seem a bit odd for this problem, because there does not appear to be a pivotal statistic, or if there is, it isn't the usual Z or T statistic. Here's why I think this is the case. The problem of estimating the population mean, say $\mu$, to within $\pm$ 0.5% obviously depends on the value of $\mu$ (a pivotal statistic would NOT depend on $\mu$). To estimate $\mu$ within an absolute amount, say $\pm$1, is independent of the actual value of $\mu$ (in the normally distributed case). To put it another way, the width of the standard "Z" confidence interval does not depend on $\mu$; it only depends on the population standard deviation, say $\sigma$, the sample size $n$, and the level of confidence, expressed by the value Z. You can call the length of this interval $ L=L(\sigma,n,Z)=\frac{2 \sigma Z}{\sqrt{n}} $. Now we want an interval which is $0.01 \mu$ wide (equal length either side of $\mu$). So the required equation that we need to solve is: $ L=0.01 \mu=\frac{2 \sigma Z}{\sqrt{n}} $. Re-arranging for $n$ gives $ n = (\frac{2 \sigma Z}{0.01 \mu})^2 = 40,000 Z^2 (\frac{\sigma}{\mu})^2 $. Using Z=1.96 to have a 95% CI gives $ n = 153,664 \times (\frac{\sigma}{\mu})^2 $. So you need some prior information about the ratio $\frac{\sigma}{\mu}$ (by "prior information" I mean you need to know something about the ratio $\frac{\sigma}{\mu}$ in order to solve the problem). If $\frac{\sigma}{\mu}$ is not known with certainty, then the "optimal sample size" also cannot be known with certainty. The best way to go from here is to specify a probability distribution for $\frac{\sigma}{\mu}$ and then take the expected value of $(\frac{\sigma}{\mu})^2$ and put this into the above equation. What happens if we only require $\pm 0.005$ (rather than $\pm 0.005 \mu$) is that the $\mu$ in the above equations for $n$ disappears.
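The arithmetic above is easy to verify directly. A small Python check (the coefficient of variation $\sigma/\mu = 0.2$ is an assumed example value):

```python
from scipy.stats import norm

z = norm.ppf(0.975)   # ~1.96 for a 95% CI
ratio = 0.2           # assumed example value of sigma/mu

# Width of the Z interval: L = 2*sigma*Z/sqrt(n); set L = 0.01*mu, solve for n
n_exact = (2 * ratio * z / 0.01) ** 2

# With Z = 1.96 exactly, n = 153,664 * (sigma/mu)^2, as derived above
n_approx = 153_664 * ratio ** 2
```

The two values agree to well within a fraction of a percent; the tiny gap is just the difference between the rounded 1.96 and the exact 97.5% normal quantile.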
46,946
Time series cross section forecasting with R
After a bit of research, I can give a partial answer. In his book, Wooldridge discusses Poisson and negative binomial regressions for cross-section and panel data. But for regression with lagged variables he only discusses Poisson regression. Maybe negative binomial is discussed in the new edition. The main conclusion is that a random-effects Poisson regression with a lagged dependent variable can be estimated by a mixed-effects Poisson regression model. The detailed description can be found here. The mixed-effects Poisson regression in R can be estimated with glmer from package lme4. To adapt it to work with panel data, you will need to create the lagged variable explicitly. Then your estimation command should look something like this:

glmer(y ~ lagy + exo + (1|Country), data, family=quasipoisson)

You should also look into the pglm package suggested by @dickoa. But be sure to check whether it supports lagged variables. Yves Croissant, the creator of the pglm and plm packages, writes wonderful code, but unfortunately, in my personal experience, the code is not tested enough, so bugs crop up more frequently than in standard R packages.
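Creating the lagged variable explicitly is the one panel-specific step, and the key pitfall is lagging across panel boundaries. A small pandas sketch (the column names and toy numbers are hypothetical) shows the within-group lag:

```python
import pandas as pd

# Hypothetical panel: one row per (Country, year)
data = pd.DataFrame({
    "Country": ["A", "A", "A", "B", "B", "B"],
    "year":    [2000, 2001, 2002, 2000, 2001, 2002],
    "y":       [3, 5, 4, 10, 12, 9],
})

# The lag must be taken *within* each country, never across panel boundaries
data = data.sort_values(["Country", "year"])
data["lagy"] = data.groupby("Country")["y"].shift(1)

# The first observation of each country has no lag and drops out of the fit
model_data = data.dropna(subset=["lagy"])
```

A plain `data["y"].shift(1)` would silently feed country A's last count in as country B's first lag, which is exactly the kind of bug that is hard to spot in regression output.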
46,947
Time series cross section forecasting with R
Maybe you can take a look at the pglm package (from the same author as plm); use the family negbin. From a Bayesian point of view, you can also try the MCMCglmm package.
46,948
Interpolating the empirical cumulative function
The EDF is the CDF of the population constituted by the data themselves. This is exactly what you need to describe and analyze any resampling process from the dataset, including nonparametric bootstrapping, jackknifing, cross-validation, etc. Not only that, it's perfectly general: any kind of interpolation would be invalid for discrete distributions.
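The point about discrete distributions can be seen directly: all the probability mass sits on the atoms, so the EDF must be flat between them, and any interpolation would invent mass where none exists. A short Python sketch (the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.poisson(2.0, size=1000))  # a discrete (integer-valued) sample

# The EDF is exactly the CDF of resampling from the data:
# edf(t) = fraction of observations <= t
def edf(t):
    return np.searchsorted(x, t, side="right") / len(x)

# For discrete data the EDF is flat between atoms...
assert edf(1.5) == edf(1.9)
# ...and jumps exactly at them, where the probability mass actually sits
assert edf(2.0) > edf(1.9)
```

Any interpolating curve would make `edf(1.5) < edf(1.9)`, i.e. assign positive resampling probability to values like 1.7 that can never be drawn from the data.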
46,949
Interpolating the empirical cumulative function
The empirical CDF is just one estimator for the CDF. It's consistent, converges pretty quickly in general, and is dead simple to understand. If you want something fancier you could certainly get a kernel density estimate for the PDF and integrate it to get another estimate for the CDF, which would do some kind of interpolation as you suggest. But if it ain't broke....
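To make the comparison concrete, here is a Python sketch (an illustration of the idea, not code from the answer) putting the step ECDF next to a smooth CDF estimate obtained by integrating a Gaussian kernel density estimate; integrating a Gaussian KDE is the same as averaging normal CDFs centered at the data points:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(size=200)  # sample

def ecdf(t, data):
    """Empirical CDF: fraction of observations <= t (a step function)."""
    return np.mean(data[:, None] <= t, axis=0)

def kde_cdf(t, data, h=0.3):
    """Integrated Gaussian KDE: mean of N(x_i, h) CDFs -- a smooth CDF estimate."""
    return norm.cdf((t[None, :] - data[:, None]) / h).mean(axis=0)

grid = np.linspace(-3, 3, 61)
step = ecdf(grid, x)
smooth = kde_cdf(grid, x)

# Both are proper CDF estimates (nondecreasing) and agree closely in the bulk
assert np.all(np.diff(step) >= 0) and np.all(np.diff(smooth) >= 0)
assert np.max(np.abs(step - smooth)) < 0.1
```

The closeness of the two curves is the "if it ain't broke" point: the smooth estimate buys continuity at the cost of a bandwidth choice, without changing the answer much.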
46,950
Interpolating the empirical cumulative function
I can't answer this question in full generality, but I think I can state one circumstance where it certainly is not useful: The Anderson-Darling test: \begin{align*} A^2/n &:= \int_{-\infty}^{\infty} \frac{(F_{n}(x) -F(x))^2}{F(x)(1-F(x))} \, \mathrm{d}F(x) \\ &= \int_{-\infty}^{x_0} \frac{F(x)}{1-F(x)} \, \mathrm{d}F(x) + \int_{x_{n-1}}^{\infty} \frac{1-F(x)}{F(x)} \, \mathrm{d}F(x) + \sum_{i=0}^{n-2} \int_{x_i}^{x_{i+1}} \frac{(F_n(x) - F(x))^2}{F(x)(1-F(x))} \mathrm{d}F(x) \end{align*} Here, $F$ is the cumulative distribution function of the normal distribution, namely, $$ F(x) := \frac{1}{2}\left[1 + \mathrm{erf}\left(\frac{x}{\sqrt{2}}\right) \right] $$ and $F_n$ is the empirical cumulative distribution function $$ F_n(x) := \frac{1}{n} \sum_{i=0}^{n-1} \mathbb{1}_{x_i \le x} $$ (We will abuse notation a bit and let $F_{n}$ denote the linearly interpolated version of $F_n$ as well.) I repeatedly generated $n$ $~N(0,1)$ random numbers, sorted them, and then considered $F_n$ first as a step function, and then as a sequence of linear interpolants. Each interior integral was computed via Gaussian quadrature of ridiculously high degree, and the tails via exp-sinh. Did the empirical distribution fit the cumulative distribution better with linear interpolation than step interpolation? 
No, in fact they are indistinguishable as $n\to \infty$ and one is not uniformly better than the other for small $n$. Code to reproduce:

#include <iostream>
#include <random>
#include <utility>
#include <boost/math/distributions/anderson_darling.hpp>
#include <quicksvg/scatter_plot.hpp>

template<class Real>
std::pair<Real, Real> step_vs_linear(size_t n)
{
    std::random_device rd;
    Real mu = 0;
    Real sd = 1;
    std::normal_distribution<Real> dis(mu, sd);
    std::vector<Real> v(n);
    for (size_t i = 0; i < n; ++i) {
        v[i] = dis(rd);
    }
    std::sort(v.begin(), v.end());

    Real Asq = boost::math::anderson_darling_normality_step(v, mu, sd);
    Real step = Asq;
    //std::cout << "n = " << n << "\n";
    //std::cout << "Step: A^2 = " << Asq << ", Asq/n = " << Asq/n << "\n";
    Asq = boost::math::anderson_darling_normality_linear(v, mu, sd);
    Real line = Asq;
    //std::cout << "Line: A^2 = " << Asq << ", Asq/n = " << Asq/n << "\n";
    return std::pair<Real, Real>(step, line);
}

int main(int argc, char** argv)
{
    using std::log;
    using std::pow;
    using std::floor;
    size_t samples = 10000;
    std::vector<std::pair<double, double>> linear_Asq(samples);
    std::vector<std::pair<double, double>> step_Asq(samples);
    std::default_random_engine generator;
    std::uniform_real_distribution<double> distribution(3, 18);
    #pragma omp parallel for
    for (size_t sample = 0; sample < samples; ++sample) {
        size_t n = floor(pow(2, distribution(generator)));
        auto [step, line] = step_vs_linear<double>(n);
        step_Asq[sample] = std::make_pair<double, double>(std::log2(double(n)), std::log(step/n));
        linear_Asq[sample] = std::make_pair<double, double>(std::log2(double(n)), std::log(line/n));
        if (sample % 10 == 0) {
            std::cout << "Sample " << sample << "/" << samples << "\n";
        }
    }

    std::string title = "Linear (blue) vs step (orange) Anderson-Darling test";
    std::string filename = "ad.svg";
    std::string x_label = "log2(n)";
    std::string y_label = "ln(A^2/n)";
    auto scat = quicksvg::scatter_plot<double>(title, filename, x_label, y_label);
    scat.add_dataset(linear_Asq, false, "steelblue");
    scat.add_dataset(step_Asq, false, "orange");
    scat.write_all();
}

Anderson-Darling tests:

#ifndef BOOST_MATH_DISTRIBUTIONS_ANDERSON_DARLING_HPP
#define BOOST_MATH_DISTRIBUTIONS_ANDERSON_DARLING_HPP
#include <cmath>
#include <algorithm>
#include <boost/math/distributions/normal.hpp>
#include <boost/math/quadrature/exp_sinh.hpp>
#include <boost/math/quadrature/gauss_kronrod.hpp>

namespace boost { namespace math {

template<class RandomAccessContainer>
auto anderson_darling_normality_step(RandomAccessContainer const & v,
                                     typename RandomAccessContainer::value_type mu = 0,
                                     typename RandomAccessContainer::value_type sd = 1)
{
    using Real = typename RandomAccessContainer::value_type;
    using std::log;
    using std::pow;
    if (!std::is_sorted(v.begin(), v.end())) {
        throw std::domain_error("The input vector must be sorted in non-decreasing order v[0] <= v[1] <= ... <= v[n-1].");
    }

    auto normal = boost::math::normal_distribution(mu, sd);

    auto left_integrand = [&normal](Real x)->Real {
        Real Fx = boost::math::cdf(normal, x);
        Real dmu = boost::math::pdf(normal, x);
        return Fx*dmu/(1-Fx);
    };
    auto es = boost::math::quadrature::exp_sinh<Real>();
    Real left_tail = es.integrate(left_integrand, -std::numeric_limits<Real>::infinity(), v[0]);

    auto right_integrand = [&normal](Real x)->Real {
        Real Fx = boost::math::cdf(normal, x);
        Real dmu = boost::math::pdf(normal, x);
        return (1-Fx)*dmu/Fx;
    };
    Real right_tail = es.integrate(right_integrand, v[v.size()-1], std::numeric_limits<Real>::infinity());

    auto integrator = boost::math::quadrature::gauss<Real, 30>();
    Real integrals = 0;
    int64_t N = v.size();
    for (int64_t i = 0; i < N - 1; ++i) {
        auto integrand = [&normal, &i, &N](Real x)->Real {
            Real Fx = boost::math::cdf(normal, x);
            Real Fn = (i+1)/Real(N);
            Real dmu = boost::math::pdf(normal, x);
            return (Fn - Fx)*(Fn-Fx)*dmu/(Fx*(1-Fx));
        };
        auto term = integrator.integrate(integrand, v[i], v[i+1]);
        integrals += term;
    }
    return v.size()*(left_tail + right_tail +
integrals); } template<class RandomAccessContainer> auto anderson_darling_normality_linear(RandomAccessContainer const & v, typename RandomAccessContainer::value_type mu = 0, typename RandomAccessContainer::value_type sd = 1) { using Real = typename RandomAccessContainer::value_type; using std::log; using std::pow; if (!std::is_sorted(v.begin(), v.end())) { throw std::domain_error("The input vector must be sorted in non-decreasing order v[0] <= v[1] <= ... <= v[n-1]."); } auto normal = boost::math::normal_distribution(mu, sd); auto left_integrand = [&normal](Real x)->Real { Real Fx = boost::math::cdf(normal, x); Real dmu = boost::math::pdf(normal, x); return Fx*dmu/(1-Fx); }; auto es = boost::math::quadrature::exp_sinh<Real>(); Real left_tail = es.integrate(left_integrand, -std::numeric_limits<Real>::infinity(), v[0]); auto right_integrand = [&normal](Real x)->Real { Real Fx = boost::math::cdf(normal, x); Real dmu = boost::math::pdf(normal, x); return (1-Fx)*dmu/Fx; }; Real right_tail = es.integrate(right_integrand, v[v.size()-1], std::numeric_limits<Real>::infinity()); auto integrator = boost::math::quadrature::gauss<Real, 30>(); Real integrals = 0; int64_t N = v.size(); for (int64_t i = 0; i < N - 1; ++i) { auto integrand = [&](Real x)->Real { Real Fx = boost::math::cdf(normal, x); Real dmu = boost::math::pdf(normal, x); Real y0 = (i+1)/Real(N); Real y1 = (i+2)/Real(N); Real Fn = y0 + (y1-y0)*(x-v[i])/(v[i+1]-v[i]); return (Fn - Fx)*(Fn-Fx)*dmu/(Fx*(1-Fx)); }; auto term = integrator.integrate(integrand, v[i], v[i+1]); integrals += term; } return v.size()*(left_tail + right_tail + integrals); } }} #endif
Interpolating the empirical cumulative function
46,951
Using k-fold cross-validation to test all data
As far as I understand your question, it can be formulated this way: instead of calculating a quality measure for each of the k validation folds and then averaging, may I aggregate all folds and then calculate my quality measure once, hence getting one value instead of k? This question requires two perspectives: From the perspective of cross-validation itself it is fine, because training and test samples still have an empty intersection, etc. Since you just aggregate multiple samples drawn without replacement, the test distribution is not spoiled. From the perspective of the model, it depends on whether the model produces comparable scores across folds. An SVM will work in my opinion, but imagine a model which min-max-normalizes the scores across the test set (yikes), so that calculating a representative decision threshold across all test samples will be quite hard. In general, a lot of techniques which require the estimation of a parameter that itself depends on the quality of a model use this approach. A concrete example is the calculation of an operator to calibrate the scores of a classification model, e.g. Platt scaling. Furthermore (though this is not a completely satisfying argument), the open-source software RapidMiner has an operator for this approach. PS: I want to point out that, although this approach is useful for getting reliable quality measures on small datasets, it may be hard to perform statistical tests comparing the significance of two models, since CV cannot be repeated endlessly (example: how would you check whether the assumptions of the so-often-misused t-test are satisfied if you only have 6 data points?). PPS: Though also interested, I was not able to find a paper focusing on an examination of this approach; the papers I have seen so far using this technique did not bother to reference it.
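A small sketch (with simulated data, not from the original post) of the two ways of reporting: averaging per-fold accuracies versus pooling all held-out predictions first. For a decomposable metric such as accuracy and equal-sized folds the two coincide exactly; for threshold- or rank-based measures (AUC, or the min-max-normalized scores mentioned above) they can differ:

```python
import random

random.seed(0)

# Simulate 10 folds of held-out (label, prediction) pairs, 90 samples each.
k, fold_size = 10, 90
folds = []
for _ in range(k):
    fold = [(random.randint(0, 1), random.randint(0, 1)) for _ in range(fold_size)]
    folds.append(fold)

# Per-fold accuracies, then their mean (the usual k-fold report).
per_fold = [sum(y == yhat for y, yhat in fold) / len(fold) for fold in folds]
mean_of_folds = sum(per_fold) / k

# Pooled: aggregate every held-out prediction, then compute one accuracy.
pooled = sum(y == yhat for fold in folds for y, yhat in fold) / (k * fold_size)

# With equal fold sizes the two reports agree exactly.
assert abs(mean_of_folds - pooled) < 1e-12
```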
46,952
Using k-fold cross-validation to test all data
Yes it is; and while this is a very reliable way of reporting error, I would say it is even encouraged.
46,953
Using k-fold cross-validation to test all data
I'm not 100% clear on the question, but I have a few points to add: I'm assuming that the error you are trying to estimate is the prediction error. If so, I agree that 10-fold cross-validation would be a good (and likely unbiased) approximation of the true prediction error IF your training sets are sufficiently large. Large in this case means that the training sets provide enough information to build a "good" SVM (one that, in a sense, captures most of the underlying relationships between the predictors and the response). Training sets of size 900 are more than likely large enough. In fact, unless the SVM you are fitting is extremely complex, I would recommend using 5-fold cross-validation in order to get a more precise estimate of prediction error (and yes, you can average the error estimates of the 5 folds to obtain a final estimate). With regards to the question: "Would events tested using separately trained svms be comparable? i.e. through this technique could I then use my entire dataset instead of setting aside a certain fraction for training, or is this a statistically unwise thing to do?" I don't understand this question, but since the phrase "entire dataset" is in a post about CV, I just want to warn you that estimating prediction error from models fit to all available data is generally a bad idea. For cross-validation to make sense, each training set/test set pair should have no points in common. Otherwise, the true error will likely be underestimated.
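The "no points in common" requirement can be illustrated with a minimal hand-rolled fold generator (an illustrative sketch, not the poster's code): every pair of train and test indices is disjoint, and every point is tested exactly once.

```python
# Build k train/test index splits with an empty intersection in each pair.
def kfold_indices(n, k):
    idx = list(range(n))
    folds = [idx[i::k] for i in range(k)]  # round-robin fold assignment
    for i in range(k):
        test = set(folds[i])
        train = [j for j in idx if j not in test]
        yield train, sorted(test)

splits = list(kfold_indices(1000, 5))
for train, test in splits:
    assert not set(train) & set(test)  # train/test share no points
# Each of the 1000 points appears in exactly one test fold.
assert sorted(i for _, t in splits for i in t) == list(range(1000))
```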
46,954
Is it possible to apply Bayes Theorem with only samples from the prior?
The short answer is yes. Have a look at sequential MCMC / particle filters. Essentially, your prior consists of a bunch of particles ($M$ of them). So to sample from your prior, just select a particle with probability $1/M$. Since each particle has equal probability of being chosen, this term disappears in the M-H ratio. A big problem with particle filters is particle degeneracy. This happens because you are trying to represent a continuous distribution with discrete particles - there's no such thing as a free lunch! Clarification for Srikant Vadali: The question as I read it is: I have output, i.e. a posterior, from an MCMC scheme, and I want to use this posterior as a prior for a new data set. This (probably) means that you have a discrete representation of a continuous distribution, i.e. a particle representation. So rather than doing a random walk on a continuous distribution (say), you need to pick values from your prior, i.e. you pick a particle. Toni et al. use this idea with ABC.
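A minimal sketch (with hypothetical numbers) of drawing from a particle-represented prior: each of the $M$ particles is selected with probability $1/M$, which is exactly why that factor cancels in the Metropolis-Hastings ratio.

```python
import random

random.seed(1)

# Prior represented by M particles (e.g., posterior draws from an earlier run).
M = 5000
particles = [random.gauss(0.0, 1.0) for _ in range(M)]

# Sampling from the particle prior: pick a particle uniformly (prob 1/M each).
def sample_prior():
    return particles[random.randrange(M)]

draws = [sample_prior() for _ in range(20000)]
mean = sum(draws) / len(draws)
particle_mean = sum(particles) / M

# Uniform resampling reproduces the particle distribution, so the means agree.
assert abs(mean - particle_mean) < 0.05
```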
46,955
Comparing test-retest reliabilities
Both situations are specific cases of test-retest, except that the recall period is null in the first case you described. I would also expect a larger agreement in the former case, but that may be confounded with a learning or memory effect. A chance-corrected measure of agreement, like Cohen's kappa, can be used with binary variables, and bootstrapped confidence intervals might be compared in the two situations (this is better than using the sampling variance of $\kappa$ directly). This should give an indication of the reliability of your measures, or in this case diagnostic agreement, at the two occasions. A McNemar test, which tests for marginal homogeneity in matched pairs, can also be used. An approach based on the intraclass correlation is still valid and, provided your prevalence is not extreme, should be close to a simple Pearson correlation (which, for binary data, is also called a phi coefficient), the tetrachoric version suggested by @Skrikant, or the aforementioned kappa (for a large sample, and assuming that the marginal distributions of cases at the two occasions are the same, $\kappa\approx\text{ICC}$ from a one-way ANOVA). About your bonus question, you generally need 3 time points to separate the lack of (temporal) stability -- which can occur if the latent class or trait you are measuring evolves over time -- from the lack of reliability (see for an illustration the model proposed by Wiley and Wiley, 1970, American Sociological Review 35).
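A sketch of the suggested approach, with a hand-rolled Cohen's kappa for two binary ratings and a percentile-bootstrap confidence interval (the test-retest data here are simulated purely for illustration):

```python
import random

random.seed(2)

def kappa(pairs):
    # Cohen's kappa for two binary ratings given as (a, b) pairs.
    n = len(pairs)
    po = sum(a == b for a, b in pairs) / n  # observed agreement
    pa1 = sum(a for a, _ in pairs) / n      # marginal P(occasion 1 = case)
    pb1 = sum(b for _, b in pairs) / n      # marginal P(occasion 2 = case)
    pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical test-retest data: 200 cases, roughly 85% raw agreement.
pairs = []
for _ in range(200):
    a = random.random() < 0.3
    b = a if random.random() < 0.85 else (not a)
    pairs.append((int(a), int(b)))

# Percentile-bootstrap CI for kappa (preferable to the plug-in variance).
boots = sorted(kappa([random.choice(pairs) for _ in pairs]) for _ in range(2000))
lo, hi = boots[int(0.025 * 2000)], boots[int(0.975 * 2000)]
assert lo <= kappa(pairs) <= hi
```

Comparing the two situations then amounts to checking whether the two bootstrap intervals overlap.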
46,956
Comparing test-retest reliabilities
Perhaps, computing the tetrachoric correlation would be useful. See this url: Introduction to the Tetrachoric and Polychoric Correlation Coefficients
46,957
Bayes' Theorem and Agresti-Coull: Will it blend?
When applying the formula for P(B|A) for Agresti-Coull, it seems important to me to use, for the denominator (ñ), a number with uncertainty. The formula ñ=P(A)*N+4 (where N is the size of your sample) gives you this number, after you calculate P(A) with an uncertainty. With the uncertainties package, this would be:

```python
# Calculation of P(A):
P_A = ufloat(…, …)
# Calculation of P(B|A):
P_B_A = …/(P_A*N+4)
```

Thus, P(B|A) is automatically correlated with P(A). Furthermore, you must make sure that you feed ufloat() with standard deviations. This means using a particular $z_{1-\alpha/2}$ value in the Agresti-Coull formula. Hope this helps!
46,958
Bayes' Theorem and Agresti-Coull: Will it blend?
Error propagation won't work in the way handled by the uncertainties package. As you note, they're dependent, so you have to take the covariances into account. You can obtain the variance of your distribution P(B|A) using the Delta Method and use that to obtain a confidence interval. With Bayesian inference, you might find it simpler to use a credible interval. The following slides do a good job of explaining how to obtain this: Bayesian analysis of one, two, and n-parameter models A Brief Tutorial on Bayesian Thinking
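As an illustration of the delta-method route (with made-up numbers, not a definitive recipe): for $\hat{r} = \hat{P}(A\cap B)/\hat{P}(A)$ estimated from one sample of size $n$, the two proportion estimates are correlated, with $\mathrm{Cov}(\hat{P}(A\cap B), \hat{P}(A)) = P(A\cap B)(1-P(A))/n$, and the first-order (delta-method) variance can be checked against simulation:

```python
import math
import random

random.seed(3)

# Delta method for r = pAB / pA when both proportions come from the SAME
# sample of size n, hence Cov(pAB_hat, pA_hat) = pAB * (1 - pA) / n.
pA, pB_given_A, n = 0.4, 0.6, 1000
pAB = pA * pB_given_A

var_pAB = pAB * (1 - pAB) / n
var_pA = pA * (1 - pA) / n
cov = pAB * (1 - pA) / n

# Gradient of r = x/y at (pAB, pA): dr/dx = 1/pA, dr/dy = -pAB/pA^2.
var_r = (var_pAB / pA**2
         + pAB**2 * var_pA / pA**4
         - 2 * pAB * cov / pA**3)
sd_delta = math.sqrt(var_r)

# Monte Carlo check of the delta-method standard error.
reps = []
for _ in range(2000):
    a = ab = 0
    for _ in range(n):
        if random.random() < pA:
            a += 1
            if random.random() < pB_given_A:
                ab += 1
    reps.append(ab / a)
m = sum(reps) / len(reps)
mc_sd = math.sqrt(sum((x - m) ** 2 for x in reps) / (len(reps) - 1))
assert abs(mc_sd - sd_delta) / sd_delta < 0.15
```

The delta-method standard error then yields a normal-approximation confidence interval for P(B|A); the Bayesian credible interval from the slides is the alternative.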
46,959
Bayes' Theorem and Agresti-Coull: Will it blend?
This is a potential answer to the title of the original question and not necessarily the body of the question... Looking at the Agresti confidence interval, to my eyes it bears a resemblance to a Bayesian estimator. https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval#Agresti-Coull_Interval Looking at the centre $\tilde{p} = \frac{1}{\tilde{n}}\left(X + \tfrac{1}{2}z^2\right)$ (with $\tilde{n} = n + z^2$), I suspect the 1/2 could be considered Bayesian prior knowledge. Let's say, for example, that one conducts a congressional approval poll, and historically those polls have an approval rating of 0.35. Using a Bayesian train of thought, perhaps one could use that information as prior knowledge and feed it in by using 0.35 instead of the 1/2 coefficient? I suspect one could do so successfully.
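A sketch of that idea (a hypothetical variant, not a published interval): replace the 1/2 in the Agresti-Coull centre with a generic prior guess p0, so that $\tilde{p} = (X + p_0 z^2)/(n + z^2)$, and watch the interval get pulled toward the prior; p0 = 1/2 recovers the standard interval.

```python
import math

# Agresti-Coull-style interval with a prior guess p0 replacing the usual 1/2.
def agresti_coull(x, n, p0=0.5, z=1.96):
    n_tilde = n + z * z
    p_tilde = (x + p0 * z * z) / n_tilde
    half = z * math.sqrt(p_tilde * (1 - p_tilde) / n_tilde)
    return p_tilde - half, p_tilde + half

lo_std, hi_std = agresti_coull(40, 100)           # standard (p0 = 1/2)
lo_pri, hi_pri = agresti_coull(40, 100, p0=0.35)  # shrunk toward prior 0.35
assert lo_pri < lo_std and hi_pri < hi_std        # interval shifts toward 0.35
```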
46,960
Bayes' Theorem and Agresti-Coull: Will it blend?
Brown, Cai, and DasGupta, Annals of Statistics, 2002; Brown, Cai, and DasGupta, Statistical Science, 2001. I don't know if I understand you correctly, but to my knowledge the above two papers are the most-cited recent ones when it comes to confidence intervals and estimation for binomial proportions. Sorry if this is not what you wanted.
46,961
Metric spaces and the support of a random variable
Here are some technical conveniences of separable metric spaces: (a) If $X$ and $X'$ take values in a separable metric space $(E,d)$, then the event $\{X=X'\}$ is measurable, and this allows one to define random variables in the elegant way: a random variable is the equivalence class of $X$ for the "almost surely equal" relation (note that the normed vector space $L^p$ is a set of such equivalence classes). (b) The distance $d(X,X')$ between two $E$-valued r.v.'s $X, X'$ is measurable; in passing, this allows one to define the space $L^0$ of random variables equipped with the topology of convergence in probability. (c) Simple r.v.'s (those taking only finitely many values) are dense in $L^0$. And some technical conveniences of complete separable (Polish) metric spaces: (d) Existence of the conditional law of a Polish-valued r.v. (e) Given a morphism between probability spaces, a Polish-valued r.v. on the first probability space always has a copy in the second one.
46,962
Metric spaces and the support of a random variable
Interesting reference. Its value for me lies in questioning the ability of measure theoretic probability to capture an "intuition" about probability (whatever that might mean) and going on to propose an intriguing distinction; namely, between a set of measure zero having a measure zero neighborhood and a set of measure zero all of whose proper neighborhoods have positive measure. It is not apparent that separable metric spaces are the "right" way to capture this idea, though, as the comment by Matt Heath points out. It sounds like we only need a predefined subcollection of measurable sets (not necessarily even satisfying the axioms of a topology). Such a collection is conveniently obtained in a separable metric space but there are other ways to create such collections, too. Thus it appears that the idea presented here illuminates the connection between abstract measure theory and using random variables in models, but the use of metric spaces may be a bit of a red herring.
46,963
Comparing noisy data sequences to estimate the likelihood of them being produced by different instances of an identical Markov process
You can perhaps use a hidden Markov model (HMM). I know that there is an R package that estimates HMMs but cannot recall its name right now.
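Short of a full HMM, the underlying idea can be sketched with a plain first-order Markov chain: fit a transition matrix to one observed sequence, then score the other sequence's log-likelihood under it. All function names and the toy sequences below are mine, for illustration:

```python
import math
from collections import Counter

def fit_transitions(seq, states):
    """Estimate a first-order transition matrix from a state sequence,
    with add-one smoothing so unseen transitions keep nonzero probability."""
    counts = Counter(zip(seq, seq[1:]))
    P = {}
    for s in states:
        total = sum(counts[(s, t)] for t in states) + len(states)
        P[s] = {t: (counts[(s, t)] + 1) / total for t in states}
    return P

def log_likelihood(seq, P):
    """Log-likelihood of a sequence under transition matrix P
    (ignoring the initial-state probability)."""
    return sum(math.log(P[s][t]) for s, t in zip(seq, seq[1:]))

states = ["A", "B"]
seq1 = list("AABABBABAABBABAB")
seq2 = list("AABBABABBAABABBA")
P1 = fit_transitions(seq1, states)
# Higher (less negative) values mean seq2 is more plausible under seq1's dynamics.
ll = log_likelihood(seq2, P1)
```

Comparing such cross-likelihoods (each sequence scored under the other's fitted matrix) gives a crude measure of whether the two sequences could come from the same process; an HMM adds an emission layer on top of this when the states are observed only through noise.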
46,964
Comparing noisy data sequences to estimate the likelihood of them being produced by different instances of an identical Markov process
A few thoughts: Can you not just use a goodness-of-fit test? Choose a distribution and compare both samples. Or use a qqplot. You may want to do this with returns (i.e. changes) instead of the original series, since this is often easier to model. There are also relative distribution functions (see, for instance, the reldist package). You could look at whether the two series are cointegrated (use the Johansen test). This is available in the urca package (and related book). There are many multivariate time series models, such as VAR, that could be applied to model the dependencies (see the vars package). You could try using a copula, which is used for dependence modeling and is available in the copula package. If the noise is a serious concern, then try using a filter on the data before analyzing it.
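To illustrate the first suggestion without any package, here is a stdlib Python sketch of the two-sample Kolmogorov-Smirnov statistic (the largest vertical gap between the two empirical CDFs), on made-up data:

```python
import bisect

def ks_2samp_stat(a, b):
    """Two-sample KS statistic: max gap between the empirical CDFs of a and b."""
    a, b = sorted(a), sorted(b)
    def ecdf(sample, x):
        # fraction of the (sorted) sample that is <= x
        return bisect.bisect_right(sample, x) / len(sample)
    points = sorted(set(a) | set(b))
    return max(abs(ecdf(a, v) - ecdf(b, v)) for v in points)

same = [1, 2, 3, 4, 5]
shifted = [11, 12, 13, 14, 15]
d_same = ks_2samp_stat(same, same)        # identical samples: gap is 0
d_shifted = ks_2samp_stat(same, shifted)  # completely separated samples: gap is 1
```

In practice one would compare the statistic against the usual critical value, or simply call a packaged test such as ks.test in R; this sketch only shows what the statistic measures.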
46,965
Test if probabilities are statistically different?
If you have 1,000,000 independent "coin flips" that can produce 1 with probability (prob) and 0 with probability (1-prob), then the number of 1's observed will follow a Binomial distribution. Tests of statistical significance are rejection tests, i.e. reject the hypothesis that the two parameters are equal if the probability that param2 is observed in test2 when the true value is param1 is less than a certain number, like 5%, 1%, or 0.1%. These tests are typically constructed from the cumulative distribution function. The cumulative distribution function for a binomial is ugly, but can be found in R and probably some other statistics packages as well. But the good news is that with 1,000,000 cases you don't need to do that... you would if you had a relatively small number of cases. Because you have 1,000,000 independent flips, the CDF of a normal distribution is a good approximation (the Central Limit Theorem plays a role here). The mean and variance you need to use are the obvious ones, and are in the Binomial Wikipedia article. You are then comparing two normally distributed variables and can use all the standard tests you would use with normally distributed variables. For instance, if the true probability were 40*10^-6, then in 1,000,000 tests you would expect to see 40 +/- 6 positive cases. If the acceptance interval for a test is, for instance, 5 standard deviations wide on each side, then this would be compatible with both observations. If it were just 3 std dev wide on each side, one case would fit and the other would be statistically different.
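The recipe above (normal approximation to the binomial, then a standard z-type comparison) can be written down directly. This sketch uses the pooled two-proportion z statistic — the pooling is my choice; an unpooled standard error would also be defensible:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled estimate of p.
    Valid when n*p and n*(1-p) are both reasonably large."""
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (x1 / n1 - x2 / n2) / se

# e.g. 40 vs 80 positives out of 1,000,000 trials each
z = two_proportion_z(40, 10**6, 80, 10**6)
```

Here |z| comes out between 3 and 4, matching the discussion above: a 5-standard-deviation acceptance region would not separate the two rates, while a 3-standard-deviation region would.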
46,966
How to determine the correlation between two normal random variables conditioned on their sum being negative?
The conditional expectations of $X$ and $Y$ are obviously equal. Moreover, because $(X+Y)/\sqrt 2$ has a standard Normal distribution, conditional on $X+Y \lt 0$ it is distributed as $-|Z|$ where $Z$ is standard normal, whence $$E((X+Y)/\sqrt 2\mid X+Y \lt 0) = E[-|Z|] = -\frac{2}{\sqrt {2\pi}} = -\sqrt\frac{2}{\pi},$$ from which we obtain $$E[X \mid X+Y \lt 0] = E[Y \mid X+Y \lt 0] = -\sqrt{\frac{1}{\pi}}.$$ Next, because $(X+Y)^2/2$ has a $\chi^2(1)$ distribution and $(X+Y)^2$ depends on $X+Y$ only through $|X+Y|$, $$E[(X+Y)^2\mid X+Y\lt 0] = E[(X+Y)^2] = 2.$$ Writing $XY = \frac{1}{4}\left((X+Y)^2 - (X-Y)^2\right)$ and noting that $X-Y$ is independent of $X+Y$ with $E[(X-Y)^2] = 2$, we find $$E[XY\mid X+Y\lt 0] = \frac{1}{4}\left(2 - 2\right) = 0.$$ Moreover, $$\begin{aligned} 2 &= E[(X+Y)^2\mid X+Y\lt 0] \\&= E[X^2\mid X+Y\lt 0] + 2E[XY\mid X+Y\lt 0] + E[Y^2\mid X+Y\lt 0]\\&= 2 E[X^2\mid X+Y\lt 0] + 2E[XY\mid X+Y\lt 0]\\ &= 2 E[X^2\mid X+Y\lt 0]. \end{aligned}$$ Consequently $$\operatorname{Var}(X\mid X+Y\lt 0) = 1 - \left(\sqrt{\frac{1}{\pi}}\right)^2 = 1 - \frac{1}{\pi} = \operatorname{Var}(Y\mid X+Y\lt 0)$$ and $$\operatorname{Cov}(X, Y\mid X+Y\lt 0) = 0 - \left(\sqrt{\frac{1}{\pi}}\right)^2 = -\frac{1}{\pi}.$$ Therefore $$\operatorname{Cor}(X,Y\mid X+Y\lt 0) = \frac{-1/\pi}{1 - 1/\pi} = -\frac{1}{\pi-1}.$$ Here is an R simulation confirming this result. set.seed(17) n <- 1e8 x <- rnorm(n) y <- rnorm(n) i <- x + y < 0 signif(c(cor(x[i], y[i]), -1/(pi-1)), 4) [1] -0.4668 -0.4669 The empirical correlation and the formula differ by only a small amount attributable to sampling variation.
46,967
How to determine the correlation between two normal random variables conditioned on their sum being negative?
Suppose $(\Omega, \mathscr{F}, P)$ is a probability space on which a random variable $\xi$ is defined with $E[|\xi|] < \infty$. This is an interesting problem that reflects the connection between two concepts: $E[\xi|A]$ (where $A$ is an event with $P(A) > 0$) and $E[\xi|\mathscr{G}]$ (where $\mathscr{G}$ is a sub-$\sigma$-field of $\mathscr{F}$). The latter concept is the general measure-theoretic conditional expectation, while the former concept is less often seen (in probability). Rigorously, $E[\xi|A]$ should be interpreted as $\int_\Omega \xi dP^*$, where the probability measure $P^*$ is defined by (see this related question): \begin{align} P^*(B) = \frac{P(B \cap A)}{P(A)}, \quad B \in \mathscr{F}. \end{align} Therefore, unlike $E[\xi|\mathscr{G}]$ which is a r.v., $E[\xi|A]$ is a non-random number. It can be shown that$^\dagger$: \begin{align} E[\xi | A] = \frac{E[\xi I_A]}{P(A)}. \tag{1} \end{align} By the law of iterative expectations, the numerator of the right-hand side of $(1)$ equals $E[E[\xi I_A|\mathscr{G}]]$. If we further know that $A \in \mathscr{G}$, hence $I_A$ is $\mathscr{G}$-measurable, then $I_A$ can be pulled out from the inner conditional expectation, resulting in \begin{align} E[\xi | A] = \frac{E[I_AE[\xi|\mathscr{G}]]}{P(A)}. \end{align} This is the connection between $E[\xi|A]$ and $E[\xi|\mathscr{G}]$, which is the backbone of calculations below (with $\mathscr{G} = \sigma(Z)$). Given $X, Y \text{ i.i.d.} \sim N(0, 1)$, denote $X + Y \sim N(0, 2)$ by $Z$ and $[Z < 0]$ by $A$, our interest is to compute \begin{align} \operatorname{Corr}(X, Y|A) = \frac{E[XY|A] - E[X|A]E[Y|A]}{\sqrt{\operatorname{Var}(X|A)\operatorname{Var}(Y|A)}}, \tag{$*$} \end{align} where $\operatorname{Var}(X|A) = E[X^2|A] - (E[X|A])^2$. 
By the affine transformation property of the multivariate normal distribution, we have \begin{align} \begin{bmatrix} X \\ Z \end{bmatrix} \overset{d}{=} \begin{bmatrix} Y \\ Z \end{bmatrix} \sim N_2\left(\begin{bmatrix} 0 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 & 1 \\ 1 & 2 \end{bmatrix}\right) \end{align} whence $X | Z \overset{d}{=} Y | Z \sim N(Z/2, 1/2)$. Hence \begin{align} E[X | Z] = \frac{Z}{2}, \; \operatorname{Var}(X | Z) = \frac{1}{2}, \; E[X^2 | Z] = \frac{1}{2} + \frac{1}{4}Z^2. \tag{2} \end{align} The last preparation is to compute $P(A), E[ZI_A]$ and $E[Z^2I_A]$, which by $Z \sim N(0, 2)$ are respectively (where $\varphi$ denotes the density of $N(0, 1)$): \begin{align} & P(A) = P(Z < 0) = \frac{1}{2}, \\ & E[ZI_A] = \int_{-\infty}^0 t\frac{1}{\sqrt{2}}\varphi(t/\sqrt{2}) dt = -\frac{1}{\sqrt{\pi}}, \\ & E[Z^2I_A] = \int_{-\infty}^0 t^2\frac{1}{\sqrt{2}}\varphi(t/\sqrt{2}) dt = \frac{1}{2}E[Z^2] = 1. \tag{3} \end{align} Note that as $I_A$ is $\sigma(Z)$-measurable, it follows by the law of iterative expectations and $(2)$ -- $(3)$ that \begin{align} & E[XI_A] = E[E[XI_A|Z]] = E[I_AE[X|Z]] = \frac{1}{2}E[ZI_A] = -\frac{1}{2\sqrt{\pi}}, \tag{4} \\ & E[X^2I_A] = E[E[X^2I_A|Z]] = E[I_AE[X^2|Z]] = \frac{1}{2}P(A) + \frac{1}{4}E[Z^2I_A] = \frac{1}{2}. \tag{5} \end{align} To compute $E[XYI_A]$, write $XY = \frac{1}{4}(Z^2 - (X - Y)^2)$, and use the fact that $Z$ and $X - Y$ are independent; it follows that \begin{align} E[XY|Z] = \frac{1}{4}Z^2 - \frac{1}{4}E[(X - Y)^2|Z] = \frac{1}{4}Z^2 - \frac{1}{4}E[(X - Y)^2] = \frac{1}{4}Z^2 - \frac{1}{2}, \end{align} which implies, by applying the law of iterative expectations and $(3)$, that \begin{align} E[XYI_A] = \frac{1}{4}E[Z^2I_A] - \frac{1}{2}P(A) = \frac{1}{4} - \frac{1}{4} = 0. 
\tag{6} \end{align} By $(1)$, $P(A) = 1/2$ and symmetry, $(4)$ -- $(6)$ then yield \begin{align} & E[X|A] = E[Y|A] = -\frac{1}{\sqrt{\pi}}, \tag{7} \\ & \operatorname{Var}(X|A) = \operatorname{Var}(Y|A) = E[X^2|A] - (E[X|A])^2 = 1 - \frac{1}{\pi}, \tag{8} \\ & E[XY|A] = 0. \tag{9} \end{align} Substituting $(7)$ -- $(9)$ into $(*)$ gives \begin{align} \operatorname{Corr}(X, Y|A) = \frac{0 - \frac{1}{\pi}}{1 - \frac{1}{\pi}} = -\frac{1}{\pi - 1}. \end{align} $^\dagger$: Verify $(1)$ successively for indicator functions, simple functions, nonnegative r.v. and general integrable r.v.
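As a sanity check on the moments $(7)$ -- $(9)$ and the final correlation, here is a small Monte Carlo in Python (the sample size and seed are arbitrary choices of mine):

```python
import math
import random

random.seed(17)
n = 200_000
xs, ys = [], []
for _ in range(n):
    x, y = random.gauss(0, 1), random.gauss(0, 1)
    if x + y < 0:                      # keep only draws in A = {X + Y < 0}
        xs.append(x)
        ys.append(y)

m = len(xs)
ex = sum(xs) / m                               # estimate of E[X|A] = -1/sqrt(pi)
exx = sum(x * x for x in xs) / m               # estimate of E[X^2|A] = 1
exy = sum(x * y for x, y in zip(xs, ys)) / m   # estimate of E[XY|A] = 0
# By symmetry E[Y|A] = E[X|A] and Var(Y|A) = Var(X|A), so the sample
# correlation can be formed from the X-moments alone:
corr = (exy - ex * ex) / (exx - ex * ex)       # estimate of -1/(pi - 1)
```

With roughly 100,000 retained draws, each estimate should sit within a few thousandths of its theoretical value.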
46,968
Formula of conditional probability when we have discrete and continuous random variables
Suppose that $Y$ and $V$ are defined on a common probability space $(\Omega, \mathscr{F}, P)$, and $V$ has density $f$ with respect to Lebesgue measure, then your conjecture is true. In fact, whether $Y$ is discrete or not is unessential for the proof, as the formula you listed is a special case of a more general relation $(3)$ to be shown below. For any $A \in \mathscr{F}$, by the definition of conditional probability, it holds that \begin{align} P(A) = \int_\Omega P(A | V)dP, \tag{1} \end{align} where "$P(A|V)$" is a shorthand for the standard sub-$\sigma$-field-based conditional probability notation $P(A|\sigma(V))$. Also as a part of the definition of conditional probability, $P(A|V)$ is $\sigma(V)$-measurable, implying that there exists a Borel function $g: \mathbb{R} \to \mathbb{R}$ such that $P(A|V) = g(V)$. The change of variable theorem then gives \begin{align} \int_\Omega P(A|V)dP = \int_\Omega g(V(\omega))P(d\omega) = \int_\mathbb{R}g(v)f(v)dv. \tag{2} \end{align} By convention, the function $g(v)$ is usually denoted by $P(A|V = v)$. Technically speaking, the notation makes sense because for any $v \in \mathbb{R}$, the random variable $P(A | V)$ is almost surely a constant on the set $\{\omega: V(\omega) = v\}$. In other words, on this $\sigma(V)$-set, $P(A|V)_\omega$ almost surely equals a constant that depends only on $v$, which is therefore reasonable to be denoted by $g(v)$ (for more discussions on how to justify the "$P(A|V = v)$" notation rigorously under the measure-theoretic probability, refer to Exercises 33.4, 33.5, 33.6 in Probability and Measure by Patrick Billingsley). Combining $(1)$ and $(2)$ gives \begin{align} P(A) = \int_\mathbb{R}P(A|V = v)f(v)dv. \tag{3} \end{align} Now taking $A = \{Y = y\} = \{\omega: Y(\omega) = y\} \in \mathscr{F}$ in $(3)$ finishes the proof of your conjecture.
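Relation $(3)$ can be checked numerically in a concrete instance; the choices below are mine, for illustration: $V$ uniform on $[0,1]$ (so $f \equiv 1$) and $P(Y = 1 \mid V = v) = v$, for which $(3)$ gives $P(Y = 1) = \int_0^1 v\,dv = 1/2$.

```python
import random

random.seed(1)

# Midpoint-rule evaluation of P(Y = 1) = integral of P(Y = 1 | V = v) f(v) dv
# over [0, 1], with f(v) = 1 and P(Y = 1 | V = v) = v.
k = 100_000
integral = sum((i + 0.5) / k for i in range(k)) / k

# Monte Carlo on the joint experiment: draw V, then Y | V = v ~ Bernoulli(v).
trials = 100_000
hits = 0
for _ in range(trials):
    v = random.random()                    # V ~ Uniform(0, 1)
    hits += 1 if random.random() < v else 0
mc = hits / trials
```

Both routes land on 1/2, the quadrature essentially exactly and the simulation up to Monte Carlo error.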
46,969
Formula of conditional probability when we have discrete and continuous random variables
Let me offer a (long) intuitive explanation without entering into measure-theoretic arguments. The main problem is thus how to make sense of the conditional probability $P(Y = y| V=x)$ when $V$ is an (absolutely) continuous random variable and for which $P(V=x)=0$. First of all, rest assured that such a probability exists and makes sense. Here is a practical example. Example. Suppose that a real number $V$ is selected at random, with density $f$. If $V$ takes value $x$, a coin with probability of heads $g(x)$ is tossed $(0 \leq g(x) \leq 1)$. It is thus natural to assert that the conditional probability of obtaining a head, given $V=x$, is $g(x)$. But $V$ is absolutely continuous, so $\{ V=x \}$ has probability 0; thus the conditional probabilities do exist but they need to be defined. In the explanation below I'll use this example to make my argument more concrete. An intuitive but less rigorous way to define the overall probability of obtaining a head is by the following infinitesimal argument. The probability that $V$ will fall into the interval $(x,x+dx]$ is approximately $f(x)dx$. Given that $V$ falls into this interval, the probability of a head is roughly $g(x)$. So from the law of total probability, we expect that the probability of a head will be $\sum_x g(x) f(x) dx$, which approximates $\int_{-\infty}^{\infty}g(x)f(x)dx$. Thus the probability in question is nothing but a weighted average of conditional probabilities, the weights being assigned in accordance with the density $f$. Now let's look a bit more closely at what's happening here. We have two random variables $Y$ and $V$ [note that the labels now swap roles: $Y$ is the number selected at random, and $V$ = (say) the number of heads obtained]. We are specifying the density of $Y$ and for each $x$ and each Borel set $B$, we are specifying a quantity $P_x(B)$ that is to be interpreted intuitively as the conditional probability that $V\in B$ given that $Y=x$; a longer notation is $P_x(B) = P\{V\in B | Y = x\}$. 
We are trying to conclude that the probabilities of tall events involving $Y$ and $V$ are now determined. Suppose that $C$ is a two-dimensional Borel set. What is a reasonable figure for $P\{(Y, V)\in C\}$ ? Intuitively, the probability that $Y$ falls into $(x,x+dx]$ is $f_Y(x)dx$. Given that this happens, that is, roughly given $Y=x$, the only way $(Y,V)$ can lie in $C$ is if $Y$ belongs to the "section" $C_x = \{y: (x,y)\in C\}$ (as in the left figure). But this happens with probability $P_x(C_x)$. Thus we expect that the total probability that $(Y,V)$ will belong to $C$ is $$ \int_{-\infty}^{\infty} P_x(C_x)f_Y(x)dx. $$ In particular, if $C = A\times B = \{(x,y): x\in A, y\in B\}$ as in the Figure on the right, $C_x = \emptyset$ if $x\not \in A$, $C_x = B$ if $x\in A$. Thus $$ P\{(Y,V)\in C\} = P(Y\in A, V \in B) = \int_A P_x(B)f_Y(x)dx. $$ Now this reasoning may be formalized by letting the sample space be $\Omega = \mathbb{R}^2$, the Borel subsets be $\mathcal{F}$ $Y(x,v) = y$, $V(x,v) = v$ and letting $f_Y$ be the density function on $\mathbb{R}$. Suppose that for each real $x$ we are given a probability measure $P_x$ on Borel subsets of $\mathbb{R}$ and assume also that $P_x(B)$ is a piecewise continuous function of $x$, for each fixed $B$. Then it turns out that there is a unique probability measure $P$ on $\mathcal F$ such that for all Borel subsets $A, B$ of $\mathbb{R}$ $$ P(A\times B) = \int_A P_x(B) f_Y(x)dx.\tag{*} $$ The requirement (*), which can be regarded as a continuous version of the law of total probability, determines $P$ uniquely. 
In fact, if $C\in \mathcal{F}$, $P(C)$ is given by $$ P(C) = \int_{-\infty}^{\infty} P_x(C_x)f_Y(x)dx.\tag{**} $$ Now if $Y(x,y) = x, V(x,y) = y$, then $$ P(A\times B) = P(Y\in A, V\in B) $$ and $$P(C) = P\{(Y,V)\in C\}.$$ Furthermore, the distribution function of $Y$ is given by $$ F_Y(x_0) = P(Y\leq x_0) = P\{X \in A, V \in B\} = \int_{A}P_x(B)f_Y(x)dx = \int_{-\infty}^{x_0} f_Y(x)dx, $$ where $A = (-\infty, x_0]$ and $B = (-\infty, \infty)$. Furthermore, $$ P(V \in B) = P\{Y\in A, V\in B\} $$ where $A = (-\infty, \infty)$, hence $$ P(Y\in B) = \int_{\infty}^{\infty} P_x(B)f_Y(x)dx. $$ So to summarize: If we start out with a density for $Y$ and a set of probabilities $P_x(B)$ that we interpret as $P(Y\in B| V=x)$, the probabilities of events of the form $\{(Y, V)\in C\}$ are determined in a natural way if you believe that there should be a continuous version of the law of total probability; $P\{(Y, V)\in C\}$ is given by (**), which reduces to (*), in the special case when $C = A\times B$. Example (Cont'd). If $Y$ has density $f_Y$, and a coin with probability of heads $g(x)$ is tossed whenever $Y = x$ (suppose a head corresponds to $V=1$), then the probability of obtaining a head is $$ P(V = 1) = \int_{-\infty}^{\infty} P(V=1 | Y=x) f_Y(x)dx = \int_{-\infty}^{\infty} g(x) f_Y(x)dx, $$ in agreement with the previous intuitive argument.
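To make the two-stage example tangible, here is a small numerical sketch (my own illustration, not part of the argument above). Assume $V\sim\mathrm{Uniform}(0,1)$, so $f\equiv 1$ on $[0,1]$, and take $g(x)=x^2$; the weighted average is then $\int_0^1 x^2\,dx = 1/3$, and simulating the experiment recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Stage 1: select V with density f (here Uniform(0,1), so f = 1 on [0, 1]).
v = rng.uniform(0.0, 1.0, size=n)
# Stage 2: toss a coin with P(head | V = x) = g(x) = x**2.
head = rng.uniform(0.0, 1.0, size=n) < v**2

# The Monte Carlo estimate of P(head) matches the weighted average
# of g(x) against the density f, which equals 1/3 here.
print(head.mean())  # approximately 0.3333
```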
Formula of conditional probability when we have discrete and continuous random variables
Let $(\Omega,\mathcal F,P)$ be a probability space, let $V$ be a continuous random variable with density $f$ defined on this space, and let $A$ be some element of $\mathcal F$. The question then is: what is the function prescribed by $$v\mapsto P(A\mid V=v)\,?$$ Because events like $\{V=v\}$ have probability $0$, we cannot use the usual definition $P(A\mid C)=P(A\cap C)/P(C)$. Further, we want this function to satisfy $$P(A)=\int P(A\mid V=v)f(v)\,dv.\tag1$$ Here the Radon-Nikodym derivative comes in. Let $\nu$ denote the measure on $\sigma(V)$ prescribed by $$B\mapsto P(A\cap B).$$ Then, if $Q$ is the restriction of $P$ to $\sigma(V)$, the measure $\nu$ is absolutely continuous with respect to $Q$ (i.e. $Q(B)=0$ implies $\nu(B)=0$). According to the Radon-Nikodym theorem there then exists a $\sigma(V)$-measurable random variable $Z$ with $$\nu(B)=\int_B Z\,dQ=\mathbb E\,1_B Z\quad\text{for every }B\in\sigma(V).$$ The fact that $Z$ is $\sigma(V)$-measurable implies that a Borel measurable function $g$ exists such that $Z=g(V)$, which, in the special case $B=\Omega$, gives $$P(A)=\nu(\Omega)=\mathbb E\,g(V)=\int g(v)f(v)\,dv.$$ So this $g$ is a suitable candidate for satisfying $(1)$, and we take the liberty of defining it as $P(A\mid V=v)$. Is this well defined? Not exactly, because in general there are several candidates and we just picked one out. Fortunately, it can be proved that any two candidates differ at most on a $\sigma(V)$-measurable $P$-null set. This works for every event $A$, so in particular for events $\{Y=y\}$ where $Y$ is some other random variable on the space. Doing so we find $$P(Y=y)=\int P(Y=y\mid V=v)f(v)\,dv.$$
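As a concrete check on the last display (my own example, not part of the answer above): assume $V\sim\mathrm{Exp}(1)$ and $Y\mid V=v \sim \mathrm{Poisson}(v)$, so $P(Y=y\mid V=v)=e^{-v}v^y/y!$. The formula then gives $P(Y=y)=\int_0^\infty \frac{e^{-v}v^y}{y!}\,e^{-v}\,dv = 2^{-(y+1)}$, which a short Monte Carlo run confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Sample V from its density f (Exp(1)), then Y from its conditional law given V.
v = rng.exponential(scale=1.0, size=n)
y = rng.poisson(lam=v)

# Empirical P(Y = k) vs. the mixture integral, which works out to 2^-(k+1).
for k in range(4):
    print(k, round(np.mean(y == k), 4), 0.5 ** (k + 1))
```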
Are only confounders used to generate propensity scores for propensity score matching/IPW?
The balancing property of propensity scores has nothing to do with whether the predictors are confounders or not; it is a purely statistical property, independent of causal inference, matching, and so on. Pearl explains this extremely clearly in his book Causality (see section 11.3.5 in particular). Propensity scores are used to estimate the non-causal "adjustment" estimand $E[E[Y|X, T = 1]] - E[E[Y|X, T = 0]]$ without specifying models for $E[Y|X, T = 1]$ and $E[Y|X, T = 0]$. What the adjustment estimand means to you is a different story. If you seek to estimate the total effect of $T$ on $Y$, i.e., $E[Y^1] - E[Y^0]$, then the adjustment estimand is equal to the total effect when the assumptions required for causal inference are met, which include that $X$ contains a sufficient set of variables to eliminate confounding by closing all backdoor paths and not opening new backdoor paths. This set of variables is called a "sufficient adjustment set". If the adjustment estimand means something different to you (i.e., it refers to a non-causal quantity or a causal quantity other than the total effect), then you will need different rules for choosing $X$. The definition of a confounder is precise (I have described it in another answer): a confounder is a member of a minimal sufficient adjustment set, that is, a set for which no proper subset of variables is also a sufficient adjustment set. There may, however, be variables in your adjustment set that are not part of any minimal sufficient adjustment set, in which case they are not confounders. Still, we know that including such variables can affect the precision of the resulting effect estimate in finite samples. In particular, including instruments (causes of the treatment but not the outcome) reduces precision, whereas including prognostic variables (causes of the outcome that neither cause nor are caused by the treatment) increases precision.
This is true whether you are using propensity scores to estimate the adjustment estimand or any other method that relies on covariate adjustment, like matching methods that don't use the propensity score or regression adjustment. There is nothing special about propensity scores in terms of variable selection. The goal is not to predict treatment well. The goal is to estimate a propensity score that balances the sufficient adjustment set and does so while maintaining precision. Including the confounders and prognostic variables accomplishes this, and including instruments can make things worse.
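The precision point can be illustrated with a small simulation (my own sketch, not from the answer above; the data-generating process and variable names are invented): a confounder x, an instrument z that affects treatment only, and a prognostic variable w that affects the outcome only. Comparing IPW estimates whose propensity model adds w versus z:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(y, t, ps):
    # Hajek (ratio) form of the inverse-probability-weighted ATE estimator
    w1, w0 = t / ps, (1 - t) / (1 - ps)
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

def one_replication(rng, n=2000, tau=1.0):
    x = rng.normal(size=n)  # confounder: affects both treatment and outcome
    z = rng.normal(size=n)  # instrument: affects treatment only
    w = rng.normal(size=n)  # prognostic variable: affects outcome only
    t = (rng.uniform(size=n) < 1 / (1 + np.exp(-(x + 1.5 * z)))).astype(int)
    y = tau * t + 2 * x + 2 * w + rng.normal(size=n)
    estimates = []
    for covs in ([x, w], [x, z]):  # PS model with prognostic vs. with instrument
        X = np.column_stack(covs)
        ps = LogisticRegression(C=1e6).fit(X, t).predict_proba(X)[:, 1]
        estimates.append(ipw_ate(y, t, np.clip(ps, 0.01, 0.99)))
    return estimates

rng = np.random.default_rng(0)
est_w, est_z = np.array([one_replication(rng) for _ in range(200)]).T
print(est_w.mean(), est_z.mean())  # both close to the true effect of 1
print(est_w.std(), est_z.std())    # the instrument model has the larger SD
```

Both propensity models produce roughly unbiased estimates, but the replication-to-replication spread is visibly larger when the instrument is in the model, which is the point being made in the answer.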
What does it mean by "maximum likelihood estimation (MLE) problem is unbounded"?
$\DeclareMathOperator{\diag}{diag}$ $\DeclareMathOperator{\tr}{tr}$ A great question. Most standard multivariate analysis texts treat only the $N > m$ case to give the MLE of $\mu$ and $X$ (or more frequently, $\Sigma = X^{-1}$). The $N \leq m$ case is almost always overlooked (or merely mentioned qualitatively). In short, "the MLE problem is unbounded" means that the objective function \begin{align} l(\mu, X) := -\log\det(X) + \frac{1}{N}\sum_{i = 1}^N(\hat{\xi}_i - \mu)^TX(\hat{\xi}_i - \mu), \quad \mu \in \mathbb{R}^m, X \in S_+^m \end{align} (which equals $-2/N$ times the log-likelihood up to an additive constant, so that maximizing the likelihood amounts to minimizing $l$) does not have a fixed lower bound (one depending on $\hat{\xi}_1, \ldots, \hat{\xi}_N$ only) when $m \geq N$ (by contrast, when $N > m$, almost every multivariate analysis textbook will show you that $l(\mu, X)$ is bounded below). To show that $l(\mu, X)$ can be made arbitrarily small, first introduce the following notation: \begin{align} & Z = \begin{bmatrix} \hat{\xi}_1^T \\ \vdots \\ \hat{\xi}_N^T \end{bmatrix} \in \mathbb{R}^{N \times m}, \; e = \begin{bmatrix} 1 \\ \vdots \\ 1 \end{bmatrix} \in \mathbb{R}^{N \times 1}, \; \bar{\xi} = N^{-1}\sum_{i = 1}^N\hat{\xi}_i, \\ & A = \sum_{i = 1}^N(\hat{\xi}_i - \bar{\xi})(\hat{\xi}_i - \bar{\xi})^T = Z^T(I_{(N)} - N^{-1}ee^T)Z := Z^TPZ \in \mathbb{R}^{m \times m}. \end{align} It can be seen that $P$ is idempotent, whence $A = (PZ)^TPZ$ is positive semi-definite, and \begin{align} r:= \operatorname{rank}(A) = \operatorname{rank}(PZ) \leq \operatorname{rank}(P) = N - 1 < m. \end{align} Therefore, the spectral decomposition of $A$ can be written as \begin{align} A = O\diag(\Lambda, 0_{(m - r)})O^T, \end{align} where $O$ is an order $m$ orthogonal matrix, $\Lambda = \diag(\lambda_1, \ldots, \lambda_r)$, and $\lambda_i (1 \leq i \leq r)$ are the positive eigenvalues of $A$. Using this notation, $l(\mu, X)$ can be rewritten as: \begin{align} l(\mu, X) = -\log\det(X)+\frac{1}{N}\tr(XA) + (\bar{\xi} - \mu)^TX(\bar{\xi} - \mu). 
\end{align} Let $X^* = O\diag(\Lambda^{-1}, kI_{(m - r)})O^T$ (where $k > 0$ is to be determined), $\mu^* = \bar{\xi}$, it then follows that \begin{align} & l(\mu^*, X^*) = \sum_{i = 1}^r\log\lambda_i - (m - r)\log k+ \frac{r}{N}, \end{align} whence $l(\mu^*, X^*) \to -\infty$ as $k \to \infty$, i.e., $l$ cannot be bounded from below. If the above matrix operations are too complicated to understand, you can get some sense by considering the case $m = N = 1$ (univariate normal distribution with only one observation), for which the objective function is \begin{align} l(\mu, X) = -\log X + (\hat{\xi}_1 - \mu)^2X. \end{align} Therefore $l(\hat{\xi}_1, X) = -\log X \to -\infty$ as $X \to +\infty$, i.e., the objective function is unbounded.
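The unboundedness is easy to see numerically. Below is a small sketch (my own, with invented data) for $N = 2$ observations in $m = 3$ dimensions: build $A$, form $X^* = O\,\mathrm{diag}(\Lambda^{-1}, kI)\,O^T$ as above, and watch $l(\bar\xi, X^*)$ decrease without bound as $k$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m = 2, 3
xi = rng.normal(size=(N, m))        # N observations in m dimensions, N <= m

xbar = xi.mean(axis=0)
A = (xi - xbar).T @ (xi - xbar)     # scatter matrix, rank <= N - 1 < m

def l(mu, X):
    # objective: -log det(X) + (1/N) * sum_i (xi_i - mu)^T X (xi_i - mu)
    quad = np.mean([(row - mu) @ X @ (row - mu) for row in xi])
    return -np.linalg.slogdet(X)[1] + quad

# X* = O diag(Lambda^{-1}, k I) O^T: invert the positive eigenvalues of A
# and put a large value k on the null space of A.
eigval, O = np.linalg.eigh(A)       # eigenvalues in ascending order
for k in [1e2, 1e4, 1e6]:
    d = np.where(eigval > 1e-10, 1.0 / np.maximum(eigval, 1e-10), k)
    X_star = O @ np.diag(d) @ O.T
    print(k, l(xbar, X_star))       # decreases without bound as k grows
```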
EFA: N factors are too many for N variables
The reason is that factanal implements maximum likelihood estimation, which imposes a constraint. Given a random sample $X_1,\ldots,X_n$, the factor model assumes that $$X_i-\mu = LF_i +\varepsilon_i,$$ where $L$ is the matrix of factor loadings, $F_i$ is the vector of factor scores and $\varepsilon_i$ is an error term whose covariance matrix is the diagonal matrix $\Psi$ of specific variances. In particular, given an observed iid sample $x_1,\ldots,x_n$ from $N_p(\mu, \Sigma)$, the log-likelihood function is, up to an additive constant, \begin{eqnarray} \ell(\theta) & = & \log\prod_{j=1}^n \phi_{p}(x_j;\mu,\Sigma)\\ &= & -\frac{n}{2}\log\vert\Sigma\vert - \sum_{j=1}^n(x_j-\mu)^\top\Sigma^{-1}(x_j-\mu)/2, \end{eqnarray} where $\Sigma = LL^\top+\Psi$. It can be shown that the MLE of $\mu$ is $\bar x$. For $L,\Psi$ there is some further work to be done. First, to find a unique solution a constraint has to be imposed, namely $$ L^\top\Psi^{-1} L = \Delta,\quad\text{with}\,\,\,\Delta\,\,\,\text{a diagonal matrix}.\quad (*) $$ The estimation of $L,\Psi$ then proceeds numerically and the solution found has to satisfy $(*)$. It can be shown that the degrees of freedom of the model are $$\nu = [(p-m)^2 -p-m]/2,$$ where $m$ is the number of factors. For the fitted model to be meaningful (and testable) one needs $\nu \geq 0$; when $m$ is too large relative to $p$ this quantity is negative, which is why factanal reports that the requested number of factors is too many for the number of variables.
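The condition $\nu \geq 0$ gives a quick rule of thumb for the largest admissible number of factors. Here is a short sketch (in Python rather than R, and my own code rather than factanal's internals) that computes it directly from the degrees-of-freedom formula:

```python
def df(p, m):
    # degrees of freedom of the ML factor model: [(p - m)^2 - p - m] / 2
    return ((p - m) ** 2 - p - m) / 2

def max_factors(p):
    # largest number of factors m with non-negative degrees of freedom
    return max(m for m in range(p) if df(p, m) >= 0)

for p in range(3, 11):
    print(p, max_factors(p))  # e.g. p = 5 variables admit at most m = 2 factors
```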
Why does univariate Mahalanobis distance not match z-score?
Your intuition about the Mahalanobis distance is correct. However, the EllipticEnvelope algorithm computes robust estimates of the location and covariance matrix which don't match the raw estimates. (See the scikit-learn documentation for details.) In practice, this means that the z scores you compute by hand are not equal to (the square root of) the Mahalanobis distances. First we sample from a univariate Normal and we compute the raw estimates of the mean and variance parameters. import numpy as np from sklearn.covariance import EllipticEnvelope x = np.random.default_rng(42).normal(loc=1, scale=10, size=1000) x.mean() #> 0.71108449 x.var() #> 97.75718960 The robust estimates are not equal to the raw estimates. (location = mean, precision = 1 / variance). ee = EllipticEnvelope() ee = ee.fit(x.reshape(-1, 1)) ee.location_ #> array([1.09312831]) 1 / ee.precision_ #> array([[79.27975244]]) As a result, the "raw" z scores are not equal to the "robust" z scores. dist = ee.mahalanobis(x.reshape(-1, 1)) robust_mean = ee.location_ robust_std = 1 / np.sqrt(ee.precision_[0, 0]) np.vstack([ np.sqrt(dist), np.abs(x - x.mean()) / x.std(), np.abs(x - robust_mean) / robust_std ]) #> array([[0.33176884, 1.17846656, 0.83237332, ..., 0.12563584, 0.13640215, 0.91469881], #> [0.33741386, 1.02262536, 0.78823215, ..., 0.15178124, 0.16147682, 0.86237019], #> [0.33176884, 1.17846656, 0.83237332, ..., 0.12563584, 0.13640215, 0.91469881]]) Hopefully, this is instructive. It seems to me that first computing z scores, as you do, and then "robustifying" them to find outliers defeats the purpose. Instead apply EllipticEnvelope to the original features and trust the method to come up with reliable estimates of the mean and the covariance.
How is the denominator in one sample Z test of proportion derived?
When you take a random sample of size $n$ and observe a binary outcome, to conduct this test you first code one of the possible outcomes as $1$ and the other as $0.$ Your model is that the probability of $1$ is some unknown number $p_0.$ Letting the values in the sample be the (random variables) $X_1, X_2, \ldots, X_n,$ the proportion coded $1$ in the sample can be found by summing the $X_i$ (a mathematically simpler operation than counting the ones) because the zeros don't contribute anything. You then divide by the sample size to obtain the proportion: $$\hat p = \frac{1}{n}\sum_{i=1}^n X_i.$$ Because $\hat p$ is a function of random variables, it, too, is a random variable. The test relies on that and on working out the distribution of $\hat p.$ For an exact test of proportion, we would work out this distribution exactly. The $Z$ test approximates the distribution of $\hat p.$ It uses a Normal distribution. This use is partially justified by the Central Limit Theorem. It is particularly convenient because you can identify any Normal distribution from just two values. The simplest, and most often used, are its mean and variance. What's nice about the mean and the variance is that they are easy to work out for $\hat p,$ because (in a sense about to be illustrated) both of these quantities add. A conventional symbol for the mean, or expectation, is $E$. What it means to "add" is that it is a linear operator: namely, the expectation of a linear combination of random variables is the same linear combination of their expectations: $$E[\hat p] = E\left[\frac{1}{n}\sum_{i=1}^n X_i\right] = \frac{1}{n}\sum_{i=1}^n E[X_i].$$ The expectation of any of the $X_i$ is, as always, the sum of its possible values weighted by their chances of occurring. And since the chance that $X_i=0$ must be $1-p_0,$ we find $$E[X_i] = 0 \times (1-p_0) + 1 \times p_0 = p_0.$$ A conventional symbol for the variance is $\operatorname{Var}.$ It is a quadratic form. 
This has a somewhat complicated meaning in general, but in the special case where the $X_i$ have been independently sampled it means $$\operatorname{Var}(\hat p) = \operatorname{Var}\left(\frac{1}{n}\sum_{i=1}^n X_i\right) = \frac{1}{n^2}\sum_{i=1}^n \operatorname{Var}(X_i).$$ Notice how $1/n$ was squared: that's the meaning of "quadratic." One formula for the variance is in terms of squared deviations from the mean, $(X_i - E[X_i])^2:$ it's their expectation. Once again, this is the probability-weighted sum of the values, whence $$\begin{aligned} \operatorname{Var}(X_i) &= (0 - E[X_i])^2 \times (1-p_0) + (1 - E[X_i])^2 \times p_0 \\ &= (-p_0)^2(1-p_0) + (1 - p_0)^2 p_0\\ &=p_0(1-p_0). \end{aligned}$$ Plugging this into the previous formula shows us $$\operatorname{Var}(\hat p) = \frac{1}{n^2}\sum_{i=1}^n p_0(1-p_0) = \frac{p_0(1-p_0)}{n}.$$ Notice where the $1/n = (1/n^2)\times n$ came from: it equals the square of $1/n$ (because the variance is a quadratic form) but has been repeated $n$ times for the $n$ independent observations $X_i.$ The upshot is that The Normal approximation to the distribution of $\hat p$ has a mean of $p_0$ and variance of $p_0(1-p_0)/n.$ This is sufficient to work out the $Z$ test. However, it is convenient to use just a single reference distribution rather than a family of distributions that depend on two numbers (parameters). To this end we always standardize $\hat p.$ This simply means to change how we measure it. Just like converting from degrees F to degrees C, we shift its origin (its zero value) to have an expectation of zero and rescale it to have unit variance. Using the same algebraic rules as before -- expectations add and variance is a quadratic form -- we finally deduce that the distribution of $$Z = \frac{\hat p - E[\hat p]}{\sqrt{\operatorname{Var}(\hat p)}} = \frac{\hat p - p_0}{\sqrt{p_0(1-p_0)/n}}$$ is approximately that of the standard Normal distribution with mean $0$ and unit variance. 
Whenever you see this formula, or one like it, take a moment to recall where each of its terms comes from: it standardizes a test statistic by shifting its mean and rescaling it to have unit variance. This will help you recall such formulas, use them correctly, and understand other statistical formulas as well.
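A quick simulation can make the two derived quantities concrete. Here is a Python sketch (the values $p_0 = 0.3$ and $n = 200$ are arbitrary choices of mine, not from the question) checking that the mean and variance of $\hat p$ match $p_0$ and $p_0(1-p_0)/n$, and then forming the $Z$ statistic for one sample:

```python
import numpy as np

rng = np.random.default_rng(0)
p0, n = 0.3, 200

# Simulate many samples of size n and compare the empirical mean and
# variance of p-hat with the values derived above.
phat = rng.binomial(n, p0, size=100_000) / n
print(phat.mean())   # close to p0 = 0.3
print(phat.var())    # close to p0*(1 - p0)/n = 0.00105

# The Z statistic for a single observed sample of 0/1 outcomes:
x = rng.binomial(1, p0, size=n)
z = (x.mean() - p0) / np.sqrt(p0 * (1 - p0) / n)
```

Under the null hypothesis, repeating the last two lines many times would produce approximately standard Normal values of `z`.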
46,976
How is the denominator in one sample Z test of proportion derived?
The denominator is the standard deviation of the sampling distribution, which is known as the standard error. The standard error, at least under the assumptions of the z-test, is equal to the population standard deviation divided by the square root of the sample size. For the proportion variable you are considering, a short calculation shows the variance to be $p_0(1-p_0)$. Therefore, the standard deviation is $\sqrt{p_0(1-p_0)}$. Then we divide by the square root of the sample size to get $\dfrac{\sqrt{p_0(1-p_0)}}{\sqrt{n}} = \sqrt{\dfrac{p_0(1-p_0)}{n}}$.
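To make the arithmetic concrete, here is a tiny Python sketch with made-up numbers ($p_0 = 0.5,$ $n = 100$), where the standard error works out to $0.5/\sqrt{100} = 0.05$:

```python
import math

# Hypothetical worked example: null proportion p0 = 0.5, sample size n = 100.
p0, n = 0.5, 100

sd = math.sqrt(p0 * (1 - p0))   # population SD: sqrt(0.25) = 0.5
se = sd / math.sqrt(n)          # standard error: 0.5 / 10 = 0.05
print(se)
```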
46,977
Proper loss function for regression with uniform target distribution
"using the L2 norm assumes that the target is normally distributed"

Sorry, but this is nonsense. (There is a lot of nonsense on the internet.) Your choice of error measure or loss function assumes nothing about the (conditional or unconditional) distribution of the target variable. Rather, different loss functions elicit different functionals of the target variable. The MSE will be minimized in expectation by the conditional mean, whether the conditional distribution is normal or Poisson. (Assuming this expectation exists, and we are not dealing with a Cauchy.) The MAE will be minimized in expectation by the median. If your distribution is indeed symmetric, like the normal, both MSE and MAE will tend towards the same point prediction, but if the distribution is asymmetric, like the Poisson, the two minimizers will be different. You may find a paper of mine (Kolassa, 2020, IJF) useful. Or this thread: What are the shortcomings of the Mean Absolute Percentage Error (MAPE)?

Thus, your strategy should be to first decide which functional of your target distribution you are looking for - the mean, the median, a quantile, whatever. Then, and only then, can you choose an error measure that elicits this functional.
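This point is easy to verify empirically. The Python sketch below (the Poisson target and all constants are my own illustrative choices, not from the question) grid-searches the constant prediction minimising each loss and recovers the mean for MSE and the median for MAE:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.poisson(1.3, size=50_000)   # an asymmetric (skewed) target

# Grid-search the constant point prediction c minimising each loss.
grid = np.linspace(0.0, 5.0, 501)
mse = np.array([np.mean((y - c) ** 2) for c in grid])
mae = np.array([np.mean(np.abs(y - c)) for c in grid])

print(grid[mse.argmin()], y.mean())       # MSE minimiser sits at the mean
print(grid[mae.argmin()], np.median(y))   # MAE minimiser sits at the median
```

With this skewed target the two minimisers clearly differ (the mean is near 1.3, the median is 1), illustrating why the choice of loss must follow the choice of functional.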
46,978
Visualizing repeated measures (not longitudinal)
Ed Tufte's spare redesign of the boxplot permits a large "small multiple" graphic to be displayed. Another point Tufte makes is that by ordering small multiples according to another factor, one often gets "free" information out of the graphic. Ordering the plots by median or box height is usually insightful, because relationships among the statistics (especially between level and spread) suggest useful ways of re-expressing the data.

Here are examples based on the code in the question (to generate sample data) and code offered by former CV moderator chl to make the plots.

[Figures: nine boxplots; 100 boxplots; 500 boxplots (log scale); 50 boxplots ordered by spread]

R code

    #
    # Courtesy chl. Code has been simplified and customized.
    #
    tufte.boxplot <- function(x, g, thickness=1, col.med="White", ...) {
      k <- nlevels(g)
      plot(c(1,k), range(x), type="n",
           xlab=deparse(substitute(g)), ylab=deparse(substitute(x)), ...)
      for (i in 1:k) with(boxplot.stats(x[as.numeric(g)==i]), {
        segments(i, stats[2], i, stats[4], col=gray(.10), lwd=thickness) # "Box"
        segments(i, stats[1], i, stats[2], col=gray(.7))  # Bottom whisker
        segments(i, stats[4], i, stats[5], col=gray(.7))  # Top whisker
        points(rep(i, length(out)), out, cex=.8)          # Outliers
        points(i, stats[3], cex=1.0, col=col.med, pch=19) # Median
      })
    }
    #
    # Create data.
    #
    N <- 9 # Number of individuals
    # N <- 100
    # N <- 50
    set.seed(17) # For reproducibility
    # Vary the counts, medians, and spreads
    l <- lapply(3 + rpois(N, 5), function(n)
      exp(rnorm(n, log(rgamma(1, 20, scale=1/20)), sqrt(rgamma(1, 15, 60)))))
    df <- do.call(rbind, lapply(seq_along(l), function(i)
      data.frame(Individual=factor(i), Value=l[[i]])))
    #
    # Visualize.
    #
    # Order by decreasing median
    df$Individual <- with(df, reorder(Individual, Value, function(x) -median(x)))
    # Alternatively, order by decreasing IQR
    df$Individual <- with(df, reorder(Individual, Value,
                                      function(x) diff(quantile(x, c(3/4, 1/4)))))
    with(df, tufte.boxplot(Value, Individual, bty="n", xaxt="n", log="",
                           col.med="#8080f080", thickness=2,
                           main="Boxplots Ordered by Spread (IQR)"))
46,979
Visualizing repeated measures (not longitudinal)
In my opinion the 2nd plot is pretty good. I might just add colour = so that each individual has their own colour. The main things that jump out of that plot are:

- there is considerable variation between individuals
- there is, by comparison, much less variation within individuals
- there is considerable heterogeneity; in particular, three individuals appear to have extremely low variation
46,980
Transforming a Kumaraswamy distribution to a gamma distribution?
Let $q$ be the quantile function (inverse cdf) of the desired gamma with whatever parameters are required, and let $X \sim \text{Kumaraswamy}(a,b)$. Then (following the same general method given in part (ii) of Step 1 here), we get that $Y = q(1-(1-X^a)^b)$ has the required gamma distribution. If you don't care which gamma it is, the exponential is an easy choice to use.
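As a sanity check, here is a Python sketch of the whole pipeline (the parameter values are my own arbitrary choices, and I target the exponential, i.e. a gamma with shape 1, so the quantile function is just $q(p) = -\log(1-p)$):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = 2.0, 3.0

# Draw X ~ Kumaraswamy(a, b) by inverting its CDF F(x) = 1 - (1 - x^a)^b.
u = rng.uniform(size=100_000)
x = (1 - (1 - u) ** (1 / b)) ** (1 / a)

# Probability integral transform, then the quantile function of the
# target gamma -- here the exponential, q(p) = -log(1 - p).
p = 1 - (1 - x ** a) ** b
y = -np.log1p(-p)

print(y.mean(), y.var())   # both near 1 for a standard exponential
```

The resulting sample has mean and variance near 1, as a standard exponential should.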
46,981
How to calculate lower bound on $P \left[|Y| > \frac{|\lambda|}{2} \right]$?
Edit: This answer applies to the original question, which was: "for ANY random variable Y, what is the lower bound of (formula)?"

The lower bound is 0. Take $(Y_{n})$ to be a sequence of Bernoulli variables such that $P(Y_{n} = 1) = 1/n$. Then $E(Y_{n}) = \lambda = 1/n$ and $$P(\lvert Y_{n} \rvert > \tfrac{\lvert \lambda \rvert}{2}) = P(\lvert Y_{n} \rvert > \tfrac{1}{2n}) = P(Y_{n} = 1) = \frac{1}{n} \underset{n\to +\infty}{\longrightarrow} 0.$$ Since this probability is positive but tends to $0$, no positive number can be a lower bound for every such variable, so the greatest lower bound is 0.
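A quick Python simulation (the constants and sample sizes are mine) confirms that this probability behaves like $1/n$ and can be driven as close to 0 as you like:

```python
import numpy as np

rng = np.random.default_rng(3)

# Empirical check of the construction: Y_n ~ Bernoulli(1/n), so
# P(|Y_n| > lambda/2) = P(Y_n = 1) = 1/n, which tends to 0.
probs = {}
for n in (10, 100, 1000):
    y = rng.binomial(1, 1 / n, size=200_000)
    lam = 1 / n                           # E[Y_n]
    probs[n] = np.mean(np.abs(y) > lam / 2)
print(probs)   # roughly {10: 0.1, 100: 0.01, 1000: 0.001}
```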
46,982
How to calculate lower bound on $P \left[|Y| > \frac{|\lambda|}{2} \right]$?
Given $\lambda,$ there must be a universal lower bound $p(\lambda)$ (because $0$ will certainly work.) The question is whether there are any $\lambda$ where this bound exceeds $0.$ Regardless, being a lower bound means that for any random variable $Y$ with $E[Y^2]\lt \infty$ and $E[Y]=\lambda,$ $$\Pr(|Y| \gt |\lambda|/2) \ge p(\lambda).\tag{*}$$

Consider $Y=\lambda X/q$ where $X$ is a Bernoulli$(q)$ variable and $\lambda \ne 0.$ Because $E[X]=q,$ $E[Y] = \lambda (q)/q = \lambda.$ $Y$ also has finite second moment because it is bounded. Moreover, since $q \lt 2,$ the event "$|Y|\gt|\lambda|/2$" equals the event "$Y=\lambda/q.$" We may therefore conclude from $(*)$ that $$q = \Pr(X=1) \ge \Pr(Y=\lambda/q) = \Pr(|Y| \gt |\lambda|/2) \ge p(\lambda).$$

If we choose $q = p(\lambda)/2,$ this says $$p(\lambda)/2 \ge p(\lambda),$$ which is true only for non-positive numbers $p(\lambda).$ (You can check that this example still works when $\lambda=0,$ for then $Y$ reduces to the atom at $0$ and $\Pr(|Y|\gt |\lambda|/2) = \Pr(Y \ne 0) = 0.$)

Therefore $0$ is the only universal lower bound, no matter what the value of $\lambda$ might be.
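For intuition, here is a small Python simulation of this counterexample (the particular $\lambda = 2$ and $q = 0.001$ are arbitrary choices of mine): the mean stays pinned at $\lambda$ while the probability in question is just $q$:

```python
import numpy as np

rng = np.random.default_rng(4)
lam, q = 2.0, 0.001

# Y = lam * X / q with X ~ Bernoulli(q): its mean is exactly lam,
# yet |Y| > |lam|/2 happens only with the tiny probability q.
x = rng.binomial(1, q, size=2_000_000)
y = lam * x / q
print(y.mean())                            # near lam = 2
print(np.mean(np.abs(y) > abs(lam) / 2))   # near q = 0.001
```

Shrinking `q` further drives the probability toward 0 without moving the mean, which is exactly why no positive universal lower bound can exist.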
46,983
Gradient of the log likelihood for energy based models
The issue emerges in the evaluation of the second term in lines $(3)$ and $(4)$ of your derivation. Note that $$\nabla_{\theta} Z(\theta)^{-1} = \nabla_{\theta} \frac{1}{\int_x \exp(-E_{\theta}(x))\, dx} \neq \nabla_{\theta}\int_x \exp(E_{\theta}(x)) \, dx.$$ Instead, we have $$\nabla_{\theta} Z(\theta)^{-1} = \nabla_{\theta} \left( \frac{1}{Z(\theta)} \right) = \frac{-\nabla_{\theta}Z(\theta)}{Z(\theta)^2}.$$ Hence, \begin{align}Z(\theta)\nabla_{\theta} Z(\theta)^{-1} &= \frac{-1}{Z(\theta)}\nabla_{\theta} Z(\theta) \\ &= \frac{-1}{Z(\theta)} \nabla_{\theta} \int_x \exp(-E_{\theta}(x)) \, dx \\ &= \frac{-1}{Z(\theta)} \int_x \nabla_{\theta} \exp(-E_{\theta}(x)) \, dx \\ &= \frac{1}{Z(\theta)} \int_x \exp(-E_{\theta}(x)) \nabla_{\theta} E_{\theta}(x) \, dx, \end{align} which yields the expression you seek.

There are some technicalities concerning differentiating under the integral, i.e. the commutativity of the $\nabla_{\theta}$ and $\int_x$ operators, on which this derivation relies. If the functional form of $E_{\theta}(x)$ is such that $p_{\theta}(x)$ parametrises an exponential family, then there are no issues. If not, then depending on whether the support of $p_{\theta}$ -- and hence the limits of integration in $\int_x$ -- is finite or infinite, you will need the Leibniz integral rule or Lebesgue's dominated convergence theorem to justify this. See Statistical Inference by Casella and Berger (2004) if you are looking for simple tests of the latter without recourse to analysis results pertaining to Lebesgue integration.
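One way to convince yourself of the resulting identity $\nabla_{\theta}\log p_{\theta}(x) = -\nabla_{\theta}E_{\theta}(x) + \mathbb{E}_{p_{\theta}}[\nabla_{\theta}E_{\theta}(x)]$ is to check it numerically on a tractable energy. The Python sketch below uses my own toy choice $E_{\theta}(x) = \theta x^2/2,$ for which $p_{\theta}$ is Normal$(0, 1/\theta)$ and every term is available in closed form:

```python
import numpy as np

theta, x0, h = 1.5, 0.7, 1e-5

def log_p(x, t):
    # Tractable case: E_t(x) = t*x^2/2, so p_t = Normal(0, 1/t)
    # and Z(t) = sqrt(2*pi/t).
    return -t * x**2 / 2 - 0.5 * np.log(2 * np.pi / t)

# Finite-difference gradient of log p_theta(x0) with respect to theta ...
fd = (log_p(x0, theta + h) - log_p(x0, theta - h)) / (2 * h)

# ... versus the identity -dE/dtheta(x0) + E_p[dE/dtheta(X)],
# using dE/dtheta = x^2/2 and E_p[X^2] = 1/theta for this model.
identity = -x0**2 / 2 + (1 / theta) / 2
print(fd, identity)   # agree to numerical precision
```

For a general energy, the expectation term is intractable and is estimated by sampling from $p_{\theta}$, which is where MCMC-based training of energy models comes in.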
46,984
Estimate normal distribution from dnorm in R
For a normal density function $f,$ if you have a grid of points $x$ and corresponding density values $y = f(x),$ then you can use numerical integration to find $\mu$ and $\sigma.$ [See Note (2) at the end.]

If you have many realizations $X_i$ from the distribution, you can estimate the population mean $\mu$ by the sample mean $\bar X$ and the population SD $\sigma$ by the sample SD $S.$

Another possibility is to use a kernel density estimator (KDE) of $f$ based on a sufficiently large sample. In R the procedure density gives points $(x, y)$ that can be used to plot a density estimator.

    set.seed(718)
    x = rnorm(100, 50, 7)
    mean(x); sd(x)
    [1] 50.62287
    [1] 6.443036
    hist(x, prob=T, col="skyblue2")
    rug(x); lines(density(x), col="red")

In R, the KDE consists of 512 points with values summarized as below:

    density(x)
    Call: density.default(x = x)
    Data: x (100 obs.); Bandwidth 'bw' = 2.309
           x                y
     Min.   :31.36   Min.   :1.974e-05
     1st Qu.:41.69   1st Qu.:3.239e-03
     Median :52.03   Median :2.371e-02
     Mean   :52.03   Mean   :2.417e-02
     3rd Qu.:62.36   3rd Qu.:4.378e-02
     Max.   :72.70   Max.   :5.566e-02

You can estimate $\mu$ and $\sigma$ corresponding to the KDE as follows:

    xx = density(x)$x
    yy = density(x)$y                      # (xx, yy) are the KDE plot points
    sum(xx*yy)/sum(yy)
    [1] 50.62329                           # aprx pop mean = 50
    sum((xx-50.62)^2 * yy)/sum(yy)
    [1] 46.42294                           # aprx pop variance = 49
    sqrt(sum((xx-50.62)^2 * yy)/sum(yy))
    [1] 6.813438                           # aprx pop SD = 7

Because $\bar X$ and $S$ are sufficient statistics for $\mu$ and $\sigma,$ it is hard to imagine that $\hat \mu$ and $\hat \sigma$ re-claimed from a KDE (based on data) would be systematically better than the sample mean $\bar X = 50.62$ and SD $S = 6.44.$ I mention the KDE method because it seems possibly related to your question.

Notes:

(1) Of course there are also methods for estimating $\bar X$ and $S$ from a histogram, but they can be very inaccurate for small samples.

(2) Here is a numerical evaluation of $\mu \approx \int_0^{100} x\varphi(x,50,7)\, dx,$ using the sum of areas of 1000 rectangles.

    m = 1000
    w = (100-0)/m
    x = seq(0+w/2, 100-w/2, len=m)
    f = x*dnorm(x, 50, 7)
    sum(w*f)
    [1] 50      # mu
    f2 = (x-50)^2*dnorm(x,50,7)
    sum(w*f2)
    [1] 49      # sigma^2
Estimate normal distribution from dnorm in R
For a normal density function $f,$ if you have a grid of points X and corresponding density values $y = f(x),$ then you can use numerical integration to find $\mu$ and $\sigma.$ [See Note (2) at the e
Estimate normal distribution from dnorm in R

For a normal density function $f,$ if you have a grid of points $x$ and corresponding density values $y = f(x),$ then you can use numerical integration to find $\mu$ and $\sigma.$ [See Note (2) at the end.]

If you have many realizations $X_i$ from the distribution, you can estimate the population mean $\mu$ by the sample mean $\bar X$ and the population SD $\sigma$ by the sample SD $S.$

Another possibility is to use a kernel density estimator (KDE) of $f$ based on a sufficiently large sample. In R the procedure density gives points $(x, y)$ that can be used to plot a density estimator.

set.seed(718)
x = rnorm(100, 50, 7)
mean(x); sd(x)
[1] 50.62287
[1] 6.443036
hist(x, prob=T, col="skyblue2")
rug(x); lines(density(x), col="red")

In R, the KDE consists of 512 points with values summarized as below:

density(x)

Call:
        density.default(x = x)

Data: x (100 obs.);  Bandwidth 'bw' = 2.309

       x               y
 Min.   :31.36   Min.   :1.974e-05
 1st Qu.:41.69   1st Qu.:3.239e-03
 Median :52.03   Median :2.371e-02
 Mean   :52.03   Mean   :2.417e-02
 3rd Qu.:62.36   3rd Qu.:4.378e-02
 Max.   :72.70   Max.   :5.566e-02

You can estimate $\mu$ and $\sigma$ corresponding to the KDE as follows:

xx = density(x)$x
yy = density(x)$y                      # (xx, yy) are KDE plot points
sum(xx*yy)/sum(yy)
[1] 50.62329                           # aprx pop mean = 50
sum((xx-50.62)^2 * yy)/sum(yy)
[1] 46.42294                           # aprx pop variance = 49
sqrt(sum((xx-50.62)^2 * yy)/sum(yy))
[1] 6.813438                           # aprx pop SD = 7

Because $\bar X$ and $S$ are sufficient statistics for $\mu$ and $\sigma,$ it is hard to imagine that $\hat \mu$ and $\hat \sigma$ re-claimed from a KDE (based on data) would be systematically better than the sample mean $\bar X = 50.62$ and SD $S = 6.44.$ I mention the KDE method because it seems possibly related to your question.

Notes: (1) Of course there are also methods for estimating $\bar X$ and $S$ from a histogram, but they can be very inaccurate for small samples.

(2) Here is a numerical evaluation of $\mu \approx \int_0^{100} x\varphi(x,50,7)\, dx,$ using the sum of areas of 1000 rectangles.

m = 1000
w = (100-0)/m
x = seq(0+w/2, 100-w/2, len=m)
f = x*dnorm(x, 50, 7)
sum(w*f)
[1] 50                                 # mu
f2 = (x-50)^2*dnorm(x,50,7)
sum(w*f2)
[1] 49                                 # sigma^2
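The same grid-based recovery of $\mu$ and $\sigma$ from density values can be sketched outside R as well. A standalone Python version (standard library only; the midpoint grid and parameters mirror the rectangle-sum in Note (2) above, not any particular library API):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def moments_from_grid(xs, ys):
    """Recover mean and SD from grid points xs and density values ys by
    treating the (normalised) grid as a discrete probability distribution."""
    total = sum(ys)
    mu = sum(x * y for x, y in zip(xs, ys)) / total
    var = sum((x - mu) ** 2 * y for x, y in zip(xs, ys)) / total
    return mu, math.sqrt(var)

# Midpoint grid on [0, 100] with 1000 rectangles, density of N(50, 7^2)
m = 1000
w = 100 / m
xs = [w / 2 + i * w for i in range(m)]
ys = [normal_pdf(x, 50, 7) for x in xs]

mu_hat, sd_hat = moments_from_grid(xs, ys)
print(round(mu_hat, 4), round(sd_hat, 4))   # close to 50 and 7
```

Note that the common width $w$ cancels in the ratios, so only the density values on the grid are needed.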
46,985
Estimate normal distribution from dnorm in R
A very simple, general-purpose solution: First, write a function that takes parameters as an input and returns the difference between the predicted PDF for those parameters and the actual PDF (I've used the sum of squared differences here). Then, use optim() to find the parameters that minimise this function.

x = seq(-3,3,0.1)
pdf = dnorm(x, mean = -.5, sd = .2)
f = function(pars){
  pred_pdf = dnorm(x, mean = pars[1], sd = pars[2])
  sum((pdf - pred_pdf)^2)
}
result = optim(c(0, 1), f)  # c(0, 1) are initial values
round(result$par, 3)
# [1] -0.5  0.2
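The same least-squares idea can be sketched without optim(). A crude Python grid search over candidate (mean, sd) pairs (standard library only; a real implementation would hand the same objective to a proper optimiser, e.g. scipy.optimize.minimize, named here only as an option):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# "Observed" PDF values on a grid, generated from N(-0.5, 0.2^2)
xs = [-3 + 0.1 * i for i in range(61)]
pdf = [normal_pdf(x, -0.5, 0.2) for x in xs]

def sse(mu, sigma):
    """Sum of squared differences between candidate and observed PDF."""
    return sum((p - normal_pdf(x, mu, sigma)) ** 2 for x, p in zip(xs, pdf))

# Crude grid search over candidate (mu, sigma) pairs
candidates = [(mu / 10, sd / 10)
              for mu in range(-20, 21)    # mu in [-2.0, 2.0]
              for sd in range(1, 21)]     # sigma in (0, 2.0]
best = min(candidates, key=lambda ms: sse(*ms))
print(best)   # recovers (-0.5, 0.2)
```

Since the true parameters lie exactly on the grid, the squared-error objective is zero there and the search recovers them exactly; with off-grid truth, the grid spacing bounds the error, which is where a continuous optimiser earns its keep.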
46,986
Why is $k = \sqrt{N}$ a good solution of the number of neighbors to consider?
There are a number of quantitative finite-sample results, and also asymptotic arguments, in support of using the heuristic $k = \sqrt{n}$, where $n$ is the sample size. However, in practice, it would seem that this heuristic really should only be a starting point for selecting $k$ using data-dependent methods.

Theoretical arguments for the use of such a heuristic.

Adapting from Devroye, Györfi and Lugosi (1996), the theoretical performance of the $k$-nearest neighbour classifier can be organised, albeit not exclusively, along the following lines:

1. $k$ is a finite, fixed constant, whilst the sample size $n \rightarrow \infty$. The determination of $k$ is a priori, i.e. selected in advance, using prior knowledge, after exploratory data analysis, or using a heuristic.
2. $k \rightarrow \infty$ whilst $k / n \rightarrow 0$. Similar to the above in the a priori determination of $k$, but now $k$ is not fixed relative to the sample size $n$.
3. Data-dependent methods for determining $k$, e.g. using a training set and a test set and selecting $k$ to minimise the estimated classification error rate, or using cross-validation.

The heuristic $k = \lfloor \sqrt{n} \rfloor$, where $\lfloor \cdot \rfloor$ is the floor function, falls under the second category. From the above book, the following is a quantitative, finite-sample probabilistic bound on the excess risk $L_n - L^*$, which in turn implies an asymptotic result:

Theorem 11.1. (Devroye and Györfi (1985), Zhao (1987)). Assume that each $\mu$ has a density. If $k \rightarrow \infty$ and $k / n \rightarrow 0$ then for every $\epsilon > 0$ there is an $n_0$ such that for $n > n_0$, $$P \left( L_n - L^* > \epsilon \right) \leq 2e^{-n \epsilon^2 / (72 \gamma_d^2)},$$ where $\gamma_d$ is the minimal number of cones centered at the origin of angle $\pi / 6$ that cover $\mathbb{R}^d$. (For the definition of a cone, see Chapter 5.) Thus, the $k$-NN rule is strongly consistent.

Supplying context on the terms not defined in the extract: $L_n = L_n({g}_n) = P({g}_n(X) \neq Y)$ is the risk of the $k$-nearest neighbour classifier ${g}_n(X)$, where ${g}_n$ is estimated from a sample of size $n$; and $L^* = L(g^*) = \inf_{g \in \mathcal{G}} P(g(X) \neq Y)$ is the Bayes-optimal classification risk, or Bayes error rate, that is, the risk of the Bayes classifier $g^*$. Glossing over the measure-theoretic technicalities, that the measure $\mu$ has a density just means that $X$ has a density.

Parsing the theorem, the main condition requires that $k \rightarrow \infty$ as the sample size $n \rightarrow \infty$ in such a way that $k / n \rightarrow 0$. Your heuristic satisfies this condition because $k = \lfloor \sqrt{n} \rfloor \rightarrow \infty$ and $k / n = \lfloor \sqrt{n} \rfloor / n \approx 1 / \sqrt{n} \rightarrow 0$. Taking the theorem as an asymptotic result, the $k$-nearest neighbour classifier is strongly consistent, in the sense of $$L_n \overset{a.s.}{\longrightarrow} L^* \iff P\left( \lim_{n \rightarrow \infty} L_n = L^* \right) = 1.$$ That is, as you collect more observations $n \rightarrow \infty$, the classification error rate $L_n$ of the $k$-nearest neighbour classifier $g_n$ will converge almost surely to the minimal classification error rate you can possibly hope to achieve, $L^*$, and this convergence is exponentially fast. Furthermore, the result is non-asymptotic in that, for finite $n$, it bounds the probability that $L_n$ deviates from $L^*$ by more than $\epsilon$ in terms of finite constants.

On the use of data-dependent methods for selecting $k$.

The utility of the above theoretical result is that it supplies insight on heuristics like the one you have outlined. Its limitation, like that of many results in statistical learning theory, is that the constant $\gamma_d$ may be difficult to compute, or, in the case that it is computable, may render the bound too loose to give any practical prescriptions. Echoing the sentiment expressed in the following linked question, the authors advocate the use of data-dependent means of selecting $k$ in practice:

Consistency by itself may be obtained by choosing $k = \lfloor \sqrt{n} \rfloor$, but few --if any-- users will want to blindly use such recipes. Instead, a healthy dose of feedback from the data is preferable.

Similar consistency results for the use of a test set to select $k$ based on minimising a holdout estimate of the classification error rate are supplied therein. There seems to be some consensus that the kind of result listed above is a continuation of a line of work in the spirit of Stone (1977). A more recent, specialised treatment is by Chaudhuri and Dasgupta (2014). Further details can be found in the references below.

References.

Devroye, L., Györfi, L., & Lugosi, G. (1996). A Probabilistic Theory of Pattern Recognition. Springer. https://doi.org/10.1007/978-1-4612-0711-5. See chapters 5, 6, 11, 26.

Stone, C. J. (1977). Consistent nonparametric regression. The Annals of Statistics, 5(4), 595-645. https://doi.org/10.1214/aos/1176343886

Chaudhuri, K., & Dasgupta, S. (2014). Rates of convergence for nearest neighbour classification. Advances in Neural Information Processing Systems 27, NIPS 2014. https://papers.nips.cc/paper/2014/hash/db957c626a8cd7a27231adfbf51e20eb-Abstract.html
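The contrast between the $\lfloor \sqrt{n} \rfloor$ heuristic and a simple data-dependent (holdout) selection can be sketched concretely. A toy Python example (standard library only; the 1-D two-class data, the tiny k-NN, and the use of the held-out set for both selection and evaluation are all illustrative simplifications, not a recommended protocol):

```python
import math
import random

def knn_predict(train, x, k):
    """Majority vote among the k nearest training points to x.
    train is a list of (feature, label) pairs with labels in {0, 1}."""
    neighbours = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    votes = sum(label for _, label in neighbours)
    return 1 if votes * 2 > k else 0

def error_rate(train, test, k):
    """Fraction of held-out points misclassified by k-NN."""
    wrong = sum(knn_predict(train, x, k) != y for x, y in test)
    return wrong / len(test)

rng = random.Random(0)
# Two well-separated 1-D Gaussian classes
data = ([(rng.gauss(0.0, 1.0), 0) for _ in range(200)] +
        [(rng.gauss(3.0, 1.0), 1) for _ in range(200)])
rng.shuffle(data)
train, test = data[:300], data[300:]

# The sqrt(n) heuristic as a starting point: floor(sqrt(300)) = 17
k_heuristic = math.isqrt(len(train))
err_heuristic = error_rate(train, test, k_heuristic)

# A data-dependent alternative: pick the odd k minimising holdout error
k_best = min(range(1, 40, 2), key=lambda k: error_rate(train, test, k))

print(k_heuristic, err_heuristic, k_best)
```

By construction the holdout-selected $k$ can do no worse than the heuristic on the held-out set; whether it generalises better is exactly what proper cross-validation is for.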
46,987
Does oversampling lead to more overfitting than classweights for really small classes?
This depends at least a little on the model being used. Most often, simple oversampling is asymptotically equivalent to using class weights: an integer weight $w$ on a datapoint has an equivalent effect on loss calculations as duplicating the datapoint $w$ times. Oversampling then is just a discrete version of class-weighting, so asymptotically they should be equivalent; but for small sample sizes it doesn't seem clear that the discrete version should lead to consistently more or less overfitting. If your model does any bagging though, things change: by oversampling, you are likely to include a subset of the duplicates of one point, whereas when weighting the subsetting happens before the weights come into play. However, it's still not clear to me whether the final effect will be positive or negative in the sense of overfitting. (Unless you're also planning on using out-of-bag scores, in which case this would be quite bad, being very similar to resampling-before-splitting in cross-validation.)
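The weight-versus-duplication equivalence in the first paragraph can be checked directly. A small Python sketch using weighted log-loss (standard library only; the toy dataset and integer weight are made up for illustration):

```python
import math

def log_loss(pairs, weights=None):
    """Weighted mean negative log-likelihood over (label, predicted prob)
    pairs; labels are 0/1 and probabilities are for class 1."""
    if weights is None:
        weights = [1] * len(pairs)
    total = sum(-w * (y * math.log(p) + (1 - y) * math.log(1 - p))
                for (y, p), w in zip(pairs, weights))
    return total / sum(weights)

# A toy imbalanced dataset: (label, predicted probability of class 1)
data = [(1, 0.7), (0, 0.2), (0, 0.1), (0, 0.3)]

# Option A: weight the single minority point by 3
weighted = log_loss(data, weights=[3, 1, 1, 1])

# Option B: oversample by duplicating the minority point 3 times
oversampled = log_loss([(1, 0.7)] * 3 + data[1:])

print(weighted, oversampled)   # identical loss values
```

The two losses agree exactly, which is why the difference between the approaches only shows up once resampling machinery such as bagging sits between the data and the loss.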
46,988
What would be a good objective function for a computer vision model that predicts rotation?
A common approach is to have the neural network output $u, v$ representing the angle $\hat \theta$ such that $\cos(\hat \theta) = \frac{u}{\sqrt{u^2+v^2}}$, $\sin(\hat \theta) = \frac{v}{\sqrt{u^2+v^2}}$. This avoids "boundaries" in the output which might prove challenging for backprop to learn. Each rotation $\theta$ has a corresponding rotation matrix, which, conveniently, can be constructed from the sine and cosine of the angle: $$R = \left[ \begin{array}{cc} \cos \theta & -\sin \theta\\ \sin \theta & \cos \theta\\ \end{array} \right]$$ Let $\hat R$ be the rotation matrix corresponding to the predicted angle, and $R$ the true rotation matrix. Then $\text{tr}(R^T\hat R) = 2\cos (\hat \theta - \theta)$, which is useful for computing the cosine distance loss. Conveniently, this trace trick also works for 3D rotations, even if the rotation happens along more than one coordinate axis.
46,989
What would be a good objective function for a computer vision model that predicts rotation?
In circular statistics, your suggestion $\min((y−\hat{y})^2,(y−\hat{y}−2π)^2,(\hat{y}−y+2π)^2),$ which we could call the arc distance loss, is actually one of the known loss functions that can be used. It works, and intuitively it is certainly more sensible than the categorical approach, for the reason you mention. The reason the categorical approach is still used, I suppose, is that it circumvents treating the predictions as circular entirely. A possibly simpler and more common approach in the field of circular statistics is to use $1 - \cos(y - \hat{y}),$ which we can call the cosine distance loss, and which ranges between 0 (when $y = \hat{y}$) and 2 (when $y = \hat{y} + \pi$). The arc distance loss is related to the Wrapped Normal distribution; the cosine distance loss is related to the von Mises distribution. Of course, other loss functions on the circle (and hypersphere, for that matter) are conceivable, and used, such as analogues of absolute-error loss.
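Both losses are a few lines of code. A small Python sketch (standard library only; note I write the arc loss with wrap terms in both directions, $(d-2\pi)^2$ and $(d+2\pi)^2$ for $d = y - \hat y$, so the shorter way around the circle is always found):

```python
import math

def arc_distance_loss(y, y_hat):
    """Squared angular difference, taking the shorter way around the circle."""
    d = y - y_hat
    return min(d ** 2, (d - 2 * math.pi) ** 2, (d + 2 * math.pi) ** 2)

def cosine_distance_loss(y, y_hat):
    """1 - cos(y - y_hat): 0 when y == y_hat, maximal (2) half a turn away."""
    return 1 - math.cos(y - y_hat)

# Angles 0.1 and 2*pi - 0.1 are only 0.2 apart on the circle,
# even though the naive squared error would be about 37.
y, y_hat = 0.1, 2 * math.pi - 0.1
print(arc_distance_loss(y, y_hat))     # ~0.04
print(cosine_distance_loss(y, y_hat))  # ~0.02
```

Near $y = \hat y$ the two behave alike (by Taylor expansion, $1 - \cos d \approx d^2/2$); they differ mainly in how harshly they penalise predictions near half a turn away.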
46,990
What is the best way to regress proportions (as both dependent and independent variables)?
Suppose we have a model such as $$y = x$$ where $y$ and $x$ are some measurements in a number of samples. Now, if we introduce a third variable, something like the number of subjects in each sample or the size of each population, $z$, and we wish to form another model so that we are dealing with proportions, we could have the model $$\frac{y}{z} = \frac{x}{z}$$ It should now be obvious that, since $z$ appears in the denominator on both sides, the two sides are "coupled", hence the term mathematical coupling.

A simple example in R can show this. For simplicity we simulate three variables from a standard normal distribution independently:

> set.seed(1)
> x <- rnorm(100)
> y <- rnorm(100)
> cor(x,y)
[1] -0.0009943199

...so the correlation is close to zero. Or in linear regression:

> summary(lm(y~x))

Call:
lm(formula = y ~ x)

Residuals:
    Min      1Q  Median      3Q     Max
-1.8768 -0.6138 -0.1395  0.5394  2.3462

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.03769    0.09699  -0.389    0.698
x           -0.00106    0.10773  -0.010    0.992

Residual standard error: 0.9628 on 98 degrees of freedom
Multiple R-squared:  9.887e-07, Adjusted R-squared:  -0.0102
F-statistic: 9.689e-05 on 1 and 98 DF,  p-value: 0.9922

so the estimates are close to zero and so is R^2. Now we introduce a third variable:

> z <- rnorm(100)
> cor(x/z, y/z)
[1] 0.9168795

and suddenly the correlation is above 0.9. Or in regression:

> summary(lm(I(y/z) ~ I(x/z)))

Call:
lm(formula = I(y/z) ~ I(x/z))

Residuals:
    Min      1Q  Median      3Q     Max
-45.996  -4.733  -2.784  -1.524 214.929

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  2.74090    2.53884    1.08    0.283
I(x/z)       1.44965    0.06375   22.74   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 25.35 on 98 degrees of freedom
Multiple R-squared:  0.8407, Adjusted R-squared:  0.839
F-statistic: 517.1 on 1 and 98 DF,  p-value: < 2.2e-16

...and the estimate for the slope is above zero with a very small p-value, and the R^2 is 0.8407, which is 0.9168795^2.

It is worth noting that this example is rather extreme because all the variables are standard normal, and this induces the largest possible effect of mathematical coupling. When the variables are on different scales, with different variances, of different types, or correlated with each other, the effect of mathematical coupling is less pronounced, but nevertheless still present. So extreme caution is advised when dealing with proportions.
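The same coupling effect can be reproduced deterministically, without random draws. A Python sketch (standard library only; the hand-picked data, including a $z$ with one near-zero entry, are made up so that the induced correlation is easy to see):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# x and y chosen to be essentially uncorrelated
x = [1.0, -2.0, 0.5, 1.5, -1.0, 2.0, -0.5, -1.5]
y = [-1.0, -0.5, 2.0, -2.0, 1.5, 0.5, 1.0, -1.5]

# z has one value close to zero; that single point dominates x/z and y/z
z = [1.0, -1.2, 0.8, 1.1, -0.9, 0.01, 1.3, -0.7]

xz = [xi / zi for xi, zi in zip(x, z)]
yz = [yi / zi for yi, zi in zip(y, z)]

print(round(pearson(x, y), 3))    # near zero
print(round(pearson(xz, yz), 3))  # near one: coupling through z
```

Dividing by a shared $z$ makes both ratios blow up together wherever $z$ is small, which is exactly the mechanism behind the spurious correlation in the R simulation.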
46,991
Difference between an “Empirical Strategy” and an “Identification Strategy” in econometrics?
Both terms are fairly loaded, with meanings that depend on who is using them. Broadly speaking, "empirical strategy" is an umbrella term used by researchers to indicate their overall "process" in approaching a question and delivering an answer. Indeed, Angrist and Krueger write in Empirical Strategies in Labor Economics (my own highlighting):

We use the term empirical strategy broadly, beginning with the statement of a causal question, and extending to identification strategies and econometric methods, selection of data sources, measurement issues, and sensitivity tests.

This re-affirms the idea that empirical strategy is a catch-all term to indicate your overall method. In contrast, I would argue "identification strategy" means something very specific. I wrote a bit about identification in this CV question. Based on my definition of identification presented there, I would define an identification strategy as the process of defining a parameter you are interested in (such as the causal effect of treatment on outcome), and proving that your observed data (a DGP) and imposed assumptions (such as parallel trends in diff-in-diffs) identify this parameter.

In applied work, you'll probably notice people are not so rigorous about this, and instead will say something hand-wavy such as "our identification strategy leverages a difference-in-differences design..." This is fine, because in many commonly used designs, previous work has already done the tedious work of showing the formalities, so applied research can simply state that it is doing a diff-in-diff without having to explain every detail. But in works where the identification strategy is novel, the authors have to actually prove the identification strategy "works".
46,992
Interpreting the VIF in checking the multicollinearity in logistic regression
Yes, you can use VIF in the same way for logistic regression as you would in linear regression. Variance inflation factor measures how much the behaviour (variance) of an independent variable is influenced, or inflated, by its interaction/correlation with the other independent variables. Variance inflation factors allow a quick measure of how much a variable is contributing to the standard error in the regression - therefore it doesn't really matter if it's a logistic regression model or another type of regression.
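In general, the VIF for predictor $j$ is $1/(1 - R_j^2)$, where $R_j^2$ comes from regressing $x_j$ on the other predictors; with only two predictors that $R^2$ is just the squared correlation between them, which makes the computation transparent. A small Python sketch of that two-predictor case (standard library only; the data are made up to be nearly collinear):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def vif_two_predictors(x1, x2):
    """With two predictors, the R^2 from regressing one on the other is
    the squared correlation, so VIF = 1 / (1 - r^2) for both of them."""
    r = pearson(x1, x2)
    return 1.0 / (1.0 - r ** 2)

x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
x2 = [1.1, 2.3, 2.9, 4.2, 4.8, 6.1]   # nearly collinear with x1

print(round(vif_two_predictors(x1, x2), 1))   # large VIF flags collinearity
```

Nothing in this calculation involves the outcome variable, which is why the same diagnostic applies whether the response model is linear, logistic, or something else.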
46,993
Interpreting the VIF in checking the multicollinearity in logistic regression
One caveat. While in linear regression it is traditional to use a threshold of 10 for VIFs, in logistic regression a much lower value is typically used. But afaik, I haven't seen any work detailing what such a value might be.
46,994
Using Gaussian Processes to learn a function online
This is pretty straightforward to do with Bayesian learning since it corresponds to sequentially updating the posterior over $f$ as more and more data comes in. Bayesian optimization uses this a lot so that's one application to look at for this kind of thing. $\newcommand{\f}{\mathbf f}$$\newcommand{\one}{\mathbf 1}$Let's say we start with $(x_1, f(x_1)), \dots, (x_n, f(x_n))$. Our prior is $f \sim \mathcal{GP}(m, k)$ and for simplicity I'll assume $m(x) = \mu$ for all $x$, i.e. the mean function is constant. We then have $$ \left(\begin{array}{c} f_1 \\ \vdots \\ f_n\end{array}\right) := \f_n \sim \mathcal N(\mu\one_n, K_n + \alpha I) $$ where I'm adding $\alpha I$ with $\alpha > 0$ small to $K_n$ to condition it better. For a new point $x_*$ with value $f_* = f(x_*)$ we'll have $$ {\f_n \choose f_*} \sim \mathcal N\left(\mu\one_{n+1}, \left[\begin{array}{c|c}K_n + \alpha I & k_* \\ \hline k_*^T & k_{**}\end{array}\right] \right) $$ (I'm using the common notation of $k_* = (k(x_1, x_*), \dots, k(x_n, x_*))^T$ and $k_{**} = k(x_*, x_*)$) so the conditional mean of this new point given the $n$ that we've already observed is $\text E[f_* \mid \f_n] = \mu + k_*^T(K_n + \alpha I_n)^{-1}(\f_n - \mu\one_n)$. This conditional mean will be our approximation to $f$ after our first $n$ samples so $$ \hat f_n(x) = \mu + k_*(x)^T(K_n + \alpha I_n)^{-1}(\f_n - \mu\one_n). $$ Now when we get a new observation $(x_{n+1}, f(x_{n+1}))$ we will update our understanding of $f_*$ by adding this to $\f_n$ to get $\f_{n+1}$ and augmenting $K_n$ into $K_{n+1}$ so now for a new point $(x_*, f_*)$ we have $$ \hat f_{n+1}(x) = \mu + k_*(x)^T(K_{n+1} + \alpha I_{n+1})^{-1}(\f_{n+1}- \mu\one_{n+1}) $$ and the process continues. We've moved from conditioning on $n$ things to conditioning on $n+1$ things. We can also save some time with these inverses by using $K_n^{-1}$ in the computation of $K_{n+1}^{-1}$ so we can compute them recursively. 
I'll drop the $\alpha I$ from this part for cleaner notation but it can be added back in with no issue. We can treat $K_{n+1}$ as a 2x2 block matrix so we can use the formula for such a matrix's inverse. Precomputing $v_n = K_n^{-1}k_*$ and the inverse of the relevant (1x1) Schur complement $S_n := (k_{**} - k_*^Tv_n)^{-1}$, we have $$ K_{n+1}^{-1} = \begin{bmatrix} K_n & k_* \\ k_*^T & k_{**}\end{bmatrix}^{-1} = \begin{bmatrix}K_n^{-1} + S_n v_nv_n^T & -S_n v_n \\ -S_nv_n^T & S_n\end{bmatrix} . $$ Given $K_n^{-1}$ and $k_*$, computing $v_n$ is $\mathcal O(n^2)$ and once we have that $S_n$ is $\mathcal O(n)$. $K_n^{-1} + S_nv_nv_n^T$ is additionally $\mathcal O(n^2)$ so overall getting $K_{n+1}^{-1}$ from $K_n^{-1}$ is $\mathcal O(n^2)$. This is better than the best known times for explicitly inverting $K_{n+1}$ from scratch.
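The block-inverse update above can be sanity-checked in a few lines of NumPy (a sketch; the squared-exponential kernel, the inputs, and $\alpha$ are arbitrary choices for illustration, with $\alpha$ kept on the diagonal throughout):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel (an arbitrary choice for illustration)."""
    return np.exp(-0.5 * (a - b) ** 2 / ls ** 2)

def block_inverse_update(K_inv, k_star, k_ss):
    """Given inv(K_n) and the covariances of one new point, build
    inv(K_{n+1}) via the 2x2 block-inverse formula in O(n^2)."""
    v = K_inv @ k_star             # v_n, costs O(n^2)
    s = 1.0 / (k_ss - k_star @ v)  # inverse of the 1x1 Schur complement
    n = K_inv.shape[0]
    out = np.empty((n + 1, n + 1))
    out[:n, :n] = K_inv + s * np.outer(v, v)
    out[:n, n] = -s * v
    out[n, :n] = -s * v
    out[n, n] = s
    return out

alpha = 1e-6
xs = np.array([0.1, 0.4, 0.7])
x_new = 1.0

K3_inv = np.linalg.inv(rbf(xs[:, None], xs[None, :]) + alpha * np.eye(3))
K4_inv = block_inverse_update(K3_inv, rbf(xs, x_new),
                              rbf(x_new, x_new) + alpha)
```

The result agrees with inverting the augmented kernel matrix from scratch, but each update only costs $\mathcal O(n^2)$.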
46,995
Trying to make sense of claims regarding Rao-Blackwell and Lehmann-Scheffé for sufficient/complete statistics
There is a complete sufficient statistic for $\theta$ in a model ${\cal P}_\theta$ if and only if the minimal sufficient statistic is complete (according to Lehmann "An Interpretation of Completeness and Basu’s Theorem"). This means you can't have distinct $T_1(X)$ and $T_2(X)$ the way you want. As the paper says (first complete paragraph of the second column) On the other hand, existence of a complete sufficient statistic is equivalent to the completeness of the minimal sufficient statistic, and hence is a property of the model ${\cal P}$. That is, in any given model, if $T_2$ is complete sufficient and $T_1$ is sufficient, $T_1$ is also complete sufficient. The two theorems say 1/ Rao-Blackwell: Conditioning on any sufficient statistic will reduce the variance. This follows from the law of total variance. 2/ Lehmann-Scheffé: In the special case that the model has a complete sufficient statistic, you get a fully efficient estimator. In the Poisson case, the minimal sufficient statistic $\bar X$ is complete and the two estimators are identical. There's an interesting example here of a situation where a Rao-Blackwell-type estimator is not fully efficient (not even admissible). The model is $X\sim U[\theta(1-k),\theta(1+k)]$ for known $k$ and unknown $\theta$. The Cramér-Rao bound does not apply, since the range of $X$ depends on $\theta$. A sufficient statistic is the pair $(\min_i X_i, \max_i X_i)$, and any observation is an unbiased estimator; however, $E[X_1\mid(\min_i X_i, \max_i X_i)]$ is not even the best linear function of the two components of the sufficient statistic. A couple more points to fill in potential gaps: This leaves open whether there might be an unbiased estimator attaining the Cramér-Rao bound that isn't obtainable using the Lehmann-Scheffé theorem.
There isn't (in reasonably nice models): any model where the bound is attained has a score function of the form $$\frac{\partial \ell}{\partial \theta}=I(\theta)(g(x)-\theta)$$ for some $g()$ (where $I()$ is the information), in which case $g(x)$ is both a complete sufficient statistic for $\theta$ and the minimum variance unbiased estimator. As @AdamO indicates, none of this translates tidily to asymptotics: there are asymptotically unbiased estimators that beat the asymptotic information bound at a point (Hodges' superefficient estimator) and even on a dense set of measure zero (Le Cam's extension of Hodges' estimator). The best you can do is the Local Asymptotic Minimax theorem, which says you can't beat an 'efficient' estimator uniformly over neighbourhoods of $\theta_0$ with diameter $O(n^{-1/2})$.
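The uniform example lends itself to a quick simulation: for iid uniforms, $E[X_1 \mid \min, \max]$ works out to the midrange $(\min + \max)/2$, so Rao-Blackwellizing the unbiased estimator $X_1$ should visibly shrink its variance while preserving its mean. A Python sketch (the values of $\theta$, $k$, $n$ and the replicate count are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
theta, k, n, reps = 2.0, 0.5, 10, 20000   # made-up values

x = rng.uniform(theta * (1 - k), theta * (1 + k), size=(reps, n))
crude = x[:, 0]                            # X_1: unbiased but noisy
# For iid uniforms, E[X_1 | min_i X_i, max_i X_i] is the midrange:
rb = (x.min(axis=1) + x.max(axis=1)) / 2.0
```

Both estimators average to $\theta$, but the Rao-Blackwellized one has far smaller variance — even though, per the answer, it is still not admissible in this model.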
46,996
How do I cross validate when I don't have a test set?
Since the dataset is small, you can estimate out-of-sample error/performance via leave-one-out cross validation, and compare the LOOCV performances of the two models. Note that, alongside its benefits, LOOCV has its cons as well: it's computationally more expensive, and it may have larger variance in the performance metric. The latter can be mitigated by first collecting all the held-out class labels and then computing the performance once over the whole set. Also, we won't have a single model in the end.
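A minimal sketch of the pooled-LOOCV idea in Python (not from the original answer; the nearest-centroid model and the two-class synthetic data are made up for illustration — with real data you would plug in the two competing models and compare their pooled accuracies):

```python
import numpy as np

def loocv_accuracy(X, y, fit_predict):
    """Leave-one-out CV: hold out each point in turn, train on the rest,
    and pool the n held-out predictions into a single accuracy."""
    n = len(y)
    preds = np.empty(n, dtype=y.dtype)
    for i in range(n):
        mask = np.arange(n) != i
        preds[i] = fit_predict(X[mask], y[mask], X[i])
    return (preds == y).mean()

def nearest_centroid(Xtr, ytr, x):
    # hypothetical toy model: predict the class whose mean is closest
    cents = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
    return min(cents, key=lambda c: np.linalg.norm(x - cents[c]))

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

acc = loocv_accuracy(X, y, nearest_centroid)
```

Pooling the 40 held-out labels into one accuracy, as described above, gives a single number per model with less variance than averaging 40 single-point accuracies.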
46,997
How do I cross validate when I don't have a test set?
You could use a Bayesian machine learning model. The distribution of your predictions shows you how accurate the model is. If your model overfits, it will have a large variance in the predictions, because every model of the ensemble overfits differently. When you have enough data the distribution is very narrow, and with infinite data it equals the prediction of the non-Bayesian model. Every parameter of your model is now a distribution. Depending on the type of the model, they can even be interpretable. One easy way to obtain a Bayesian model is via Bayesian bootstrapping. You can either do that by training the model multiple times with different weights for each data point or by using a package like bayesian_bootstrap. The weights must be drawn from a Dirichlet distribution (which, with all concentration parameters equal to 1, is the uniform distribution on the probability simplex).
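A minimal NumPy sketch of the reweighting variant (synthetic data; the replicate count is arbitrary). Here the "model" is just the sample mean, so each refit is a weighted average; for a real model you would pass each Dirichlet weight vector as per-observation weights to one training run:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(5.0, 2.0, size=100)

# Bayesian bootstrap: each replicate reweights the data with
# Dirichlet(1, ..., 1) weights (uniform on the simplex) instead of
# resampling with replacement.
n_reps = 4000
weights = rng.dirichlet(np.ones(len(data)), size=n_reps)  # (n_reps, n)
posterior_means = weights @ data   # one weighted mean per replicate
```

The spread of `posterior_means` then plays the role of the prediction distribution described above: it is centred on the sample mean, with a width close to the usual standard error.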
46,998
What is a partial chi-square statistic according to Frank Harrell?
This is an appendix to @EdM's answer (+1). I look at the implementation (in base R) to make sure I understand the partial $\chi^2$ statistic. The R code borrows heavily from rms::anova. This is for illustration only, so there are restrictions; most importantly, it's assumed the predictors have unique names.

library("rms")
getHdata(nhgh)

g <- function(x) 0.09 - x^-(1 / 1.75)
formula <- g(gh) ~ rcs(age, 4) + re + sex + rcs(bmi, 4)

plot(anova(
  ols(formula, data = nhgh)
))

# Fit model in base R
model <- lm(formula, data = nhgh)

# `age` is transformed into a restricted cubic spline with 4 knots,
# so there are 3 components.
associated_terms(model, "age")
#> [1] "rcs(age, 4)age"   "rcs(age, 4)age'"  "rcs(age, 4)age''"

The partial $\chi^2$ statistic is $\hat{\beta}_{S}^\top\widehat{\Sigma}_{S}^{-1}\hat{\beta}_S$ where $S$ is the set of terms associated with the predictor (linear, nonlinear, interactions), $\hat{\beta}_S$ is the corresponding subset of coefficient estimates and $\widehat{\Sigma}_S$ is their covariance matrix.

rbind(
  partial_chisq(model, "sex"),
  partial_chisq(model, "re"),
  partial_chisq(model, "bmi"),
  partial_chisq(model, "age")
)
#>   predictor       chi2 df             P
#> 1       sex   15.17428  1  9.802941e-05
#> 2        re  172.89056  4  2.506276e-36
#> 3       bmi  332.38234  3  9.730894e-72
#> 4       age 1324.07373  3 8.795631e-287

The R implementation in full.

library("rms")

# Find model terms which are a function of the given predictor,
# including (linear and nonlinear) main effects and interactions.
#
# @param model: A fitted `lm` or `glm` model.
# @param predictor: The name of a single predictor.
#
# Caution!
# This function assumes predictors have unique names.
associated_terms <- function(model, predictor) {
  terms <- names(coef(model))
  terms[grepl(predictor, terms, perl = TRUE)]
}

# Compute t(x) @ inv(V) @ x
# @param V: a square n-by-n matrix.
# @param x: an n-dimensional vector.
compute_quadratic <- function(V, x) {
  x %*% solvet(V, x, tol = 1e-9)
}

# Compute the Wald chi-squared statistic for a subset of model terms.
partial_chisq <- function(model, predictor) {
  terms <- associated_terms(model, predictor)
  b <- coef(model)
  V <- vcov(model)
  idx <- names(b) %in% terms
  chi2 <- compute_quadratic(V[idx, idx], b[idx])
  df <- sum(idx)
  data.frame(
    predictor, chi2, df,
    P = pchisq(chi2, df, lower.tail = FALSE)
  )
}

# BBR, Section 4.3.5
getHdata(nhgh)

g <- function(x) 0.09 - x^-(1 / 1.75)
ginverse <- function(y) (0.09 - y)^-1.75
formula <- g(gh) ~ rcs(age, 4) + re + sex + rcs(bmi, 4)

plot(anova(
  ols(formula, data = nhgh)
))

# Fit model in base R
model <- lm(formula, data = nhgh)

# `age` is transformed into a restricted cubic spline with 4 knots,
# so there are 3 components.
associated_terms(model, "age")

rbind(
  partial_chisq(model, "sex"),
  partial_chisq(model, "re"),
  partial_chisq(model, "bmi"),
  partial_chisq(model, "age")
)

# BBR, Section 19.8
getHdata(acath)
acath <- subset(acath, !is.na(choleste))

formula <- sigdz ~ sex * rcs(age, 5)

plot(anova(
  lrm(formula, data = acath)
))

# Fit model in base R
model <- glm(formula, family = binomial, data = acath)

# The "contribution" of `age` includes a restricted cubic spline *and*
# the interaction with `sex`.
associated_terms(model, "age")

rbind(
  partial_chisq(model, "age"),
  partial_chisq(model, "sex")
)
46,999
What is a partial chi-square statistic according to Frank Harrell?
For other than ordinary least squares (OLS) regression, the anova() function in Harrell's rms package performs Wald tests on individual coefficients and sets of related coefficients; Wald tests are an option for OLS models. The Wald $\chi^2$ statistic used in the test for a coefficient or a set of coefficients is the "partial $\chi^2$ statistic." The code is in the ava() function defined at the start of rms:::anova.rms. For a vector of coefficient estimates coefs and the corresponding subset of the covariance matrix vcov(coefs), it's just the quadratic form combining the coefs with the inverse of vcov(coefs). See this answer for a simple implementation in base R. Subtracting the number of degrees of freedom corrects for the mean $\chi^2$ under the null hypothesis. For OLS, plot.anova.rms() by default multiplies similar partial F-statistics by the (numerator) degrees of freedom to get $\chi^2$ values. It can, upon request, display corresponding partial $R^2$ values.
47,000
How to know if the p value will increase or decrease
I believe that more precision could be added to exactly pin down the problem and know what test we are talking about. But because you are talking about one sample mean and one standard deviation, I will assume a classic Z-test statistic. You are trying to see if the average of your sample $\bar{x}$ is significantly different from $\mu$. The "precision of your average" is given by the standard error, expressed as $\sigma/\sqrt{n}$. All of this can be found here: https://en.wikipedia.org/wiki/Z-test One can see that the more samples you add, the more the standard error of the average shrinks. This in turn means a higher value for $Z$ and thus a lower p-value, according to the following formula: $$Z = \frac{\bar{x}-\mu}{\sigma/\sqrt{n}}$$ A more intuitive way of seeing it is that the more samples you draw, the more confident you are about the average, because the standard error around that average is smaller. Now there will be a point where that average is "precise enough" to be significantly different from any number $\mu$ that you were trying to compare it to. In the case you were given, the p-value would then decrease. [edit] Thanks to Sal Mangiafico for the precision: in the case where $\bar{x} = \mu$, the p-value will remain unchanged and equal to 1.
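To see this numerically, here is a short Python sketch (the numbers are made up): the same observed mean and standard deviation produce a shrinking p-value as $n$ grows.

```python
from math import erfc, sqrt

def z_pvalue(xbar, mu, sigma, n):
    """Two-sided p-value for a one-sample z-test.
    erfc(|z| / sqrt(2)) equals 2 * (1 - Phi(|z|))."""
    z = (xbar - mu) / (sigma / sqrt(n))
    return erfc(abs(z) / sqrt(2))

# Same xbar = 10.4, mu = 10.0, sigma = 2.0; only n changes.
ps = [z_pvalue(10.4, 10.0, 2.0, n) for n in (25, 100, 400)]
```

With $n = 25$ the z-statistic is 1 (p about 0.32); quadrupling $n$ doubles $Z$ each time, so the p-value collapses toward zero.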