**14,301. In cluster analysis, how does Gaussian mixture model differ from K Means when we know the clusters are spherical?**

In short, $k$-means can be viewed as the limiting case of Expectation-Maximization for spherical Gaussian Mixture Models as the trace of the covariance matrices goes to zero. What follows is a presentation of portions of sections 9.1 and 9.3 of *Pattern Recognition and Machine Learning*.

**$K$-means**

$K$-means seeks to find...
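The hard-assignment limit can be seen directly in the E-step. For equal-weight components with shared spherical covariance $\sigma^2 I$, the responsibility of component $k$ for a point $x$ is proportional to $\exp(-\lVert x-\mu_k\rVert^2/2\sigma^2)$, which collapses onto an indicator of the nearest mean as $\sigma^2 \to 0$ — exactly the $k$-means assignment step. A minimal numpy sketch (my illustration, not part of the answer):

```python
import numpy as np

def responsibilities(x, mus, sigma2):
    """E-step responsibilities for equal-weight spherical Gaussian components."""
    logits = -np.sum((mus - x) ** 2, axis=1) / (2 * sigma2)
    logits -= logits.max()           # stabilize before exponentiating
    r = np.exp(logits)
    return r / r.sum()

mus = np.array([[0.0, 0.0], [3.0, 0.0]])   # two component means
x = np.array([1.0, 0.2])                   # a point closer to the first mean

soft = responsibilities(x, mus, 1.0)       # moderate variance: soft assignment
hard = responsibilities(x, mus, 1e-4)      # variance -> 0: k-means-style hard assignment
print(soft, hard)
```

With `sigma2 = 1` both components receive appreciable weight; with `sigma2 = 1e-4` virtually all mass sits on the nearest mean.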
**14,302. In cluster analysis, how does Gaussian mixture model differ from K Means when we know the clusters are spherical?**

@ThomasLumley's answer is excellent.

For a concrete difference, consider that the only thing you get from $k$-means is a partition. The output from fitting a GMM can include much more than that. For example, you can compute the probability a given point came from each of the different fitted components.

A GMM can als...
**14,303. In cluster analysis, how does Gaussian mixture model differ from K Means when we know the clusters are spherical?**

$K$-means can be derived as a Maximum Likelihood (ML) estimator of a fixed partition model with Gaussian distributions with equal and spherical covariance matrices. A fixed partition model has a parameter for every observation that says to which cluster it belongs. Note that this is not an i.i.d. model, because the dis...
**14,304. Interpreting non-statistically significant results: Do we have "no evidence" or "insufficient evidence" to reject the null?**

In my experience, "insufficient evidence" is the least ambiguous and most often used way to describe the inability to reject $H_0$. The reasoning in my mind is that in statistics we hardly ever deal with absolutes. That said, this is more an interpretation of language. We can think of a test that fails to rejec...
**14,305. Interpreting non-statistically significant results: Do we have "no evidence" or "insufficient evidence" to reject the null?**

The sentence "... evidence to reject $H_0$" does not make much sense to me because you either reject $H_0$ when $p\leq\alpha$ or you don't. It's your decision to reject or not reject. "Rejection" is not an inherent property of the $p$-value because it requires an additional criterion set by the researcher.

What makes mo...
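The point that rejection depends on the researcher's criterion rather than on the $p$-value alone can be made concrete with a toy check (my illustration; the numbers are made up):

```python
# The same p-value yields different reject/not-reject decisions under
# different alpha levels, so "rejection" is a property of the pair
# (p, alpha), not an inherent property of p alone.
p_value = 0.03  # an arbitrary illustrative p-value

decisions = {alpha: p_value <= alpha for alpha in (0.01, 0.05, 0.10)}
print(decisions)   # {0.01: False, 0.05: True, 0.1: True}
```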
**14,306. Interpreting non-statistically significant results: Do we have "no evidence" or "insufficient evidence" to reject the null?**

It might be helpful to distinguish between the "objective" and "subjective" parts of statistical testing. You assume a null hypothesis $H_0$, observe data, compute a statistic, and obtain a $p$-value. You might not have used the "optimal" statistic, obtained the sharpest probabilistic bounds, etc. but there is a fixed ...
**14,307. Interpreting non-statistically significant results: Do we have "no evidence" or "insufficient evidence" to reject the null?**

This is to some extent similar to some other answers, but I feel it is still worth saying.

What I teach (and have seen elsewhere) is to either test at a fixed level $\alpha$, or to use more graded "evidence language". If we fix a level, I'd just say "We do not reject at level $\alpha$" (or we do, of course). Maybe (if yo...
**14,308. Interpreting non-statistically significant results: Do we have "no evidence" or "insufficient evidence" to reject the null?**

The two sentences have nearly the same meaning.

The phrase with 'insufficient' is just placing more stress on the idea that there is a gradual range of evidence, and that there is a 'boundary for the amount of evidence' that has not been passed.

The other phrase can be regarded as a shortened/abbreviated sentence sayin...
**14,309. Interpreting non-statistically significant results: Do we have "no evidence" or "insufficient evidence" to reject the null?**

Unless the experiment or study result showed a parameter exactly equal to the Null Hypothesis, you do have some evidence against the Null. If you have established a threshold for what p-value counts as "sufficient" evidence, and the observed p-value is greater than your threshold, then you have "insufficient" evidence. The p...
**14,310. Interpreting non-statistically significant results: Do we have "no evidence" or "insufficient evidence" to reject the null?**

The accept/reject procedure of a hypothesis test is only designed to yield the long run error rate properties of the test. It deals with 'evidence' in the data only vaguely and only to the extent that it gives a decision that the evidence is strong enough (according to the pre-data specified level of alpha) to require ...
**14,311. Interpreting non-statistically significant results: Do we have "no evidence" or "insufficient evidence" to reject the null?**

My preference is to use "no evidence".

The testing in a classical hypothesis test is a binary decision, so in this context I prefer to use "no evidence" vs "evidence". It is best not to conflate the decision to reject the null hypothesis (which is fixed by the data and has no uncertainty) with the underlying truth or f...
**14,312. Widespread overfitting in health domain research?**

You are correct that overfitting is a rampant problem in health research, just as it is in all other fields in which sample sizes are not huge. One of the biggest mistakes being made in recent years is to assume that machine learning algorithms somehow fix this problem. While algorithms can be tuned with cross-valida...
**14,313. Widespread overfitting in health domain research?**

It is common to find flawed data analysis in health research, not only flawed machine learning analysis but also flawed standard statistical analysis.

Cross-validation just helps you to estimate out-of-sample prediction, but it won't help you to correct standard errors or p-values of individual coefficients in a model...
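The in-sample optimism that out-of-sample estimation is meant to expose is easy to reproduce. A small numpy sketch (mine, not from the answer): an overparameterized polynomial fit to a tiny sample looks far better on its own training data than on fresh data from the same process.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    x = rng.uniform(-1, 1, n)
    y = x + rng.normal(0, 0.5, n)      # true signal is linear plus noise
    return x, y

x_train, y_train = make_data(15)       # small training sample
x_test, y_test = make_data(1000)       # fresh data from the same process

coefs = np.polyfit(x_train, y_train, deg=10)   # heavily overparameterized fit
mse_train = np.mean((np.polyval(coefs, x_train) - y_train) ** 2)
mse_test = np.mean((np.polyval(coefs, x_test) - y_test) ** 2)
print(f"train MSE {mse_train:.3f}, test MSE {mse_test:.3f}")
```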
**14,314. Widespread overfitting in health domain research?**

As a complement to Frank Harrell's excellent answer, there are now a number of studies that basically find exactly what one would expect:

- Evidence of Inflated Prediction Performance: A Commentary on Machine Learning and Suicide Research
- A systematic review shows no performance benefit of machine learning over logistic...
**14,315. Why are parametric tests more powerful than non-parametric tests?**

This answer is mostly going to reject the premises in the question. I'd have made it a comment calling for a rephrasing of the question so as not to rely on those premises, but it's much too long, so I guess it's an answer.

> Why are parametric tests more powerful than non-parametric tests?

As a general statement, the ...
**14,316. Why are parametric tests more powerful than non-parametric tests?**

You apply parametric tests under the assumption that the parametric model is right. This always greatly constrains the set of possibilities you are considering. Hence the power.

Consider a parametric bootstrapping where you constrain all possible distributions to a particular set of distributions such as normal. So ins...
**14,317. Why are parametric tests more powerful than non-parametric tests?**

Parametric methods can be more powerful than non-parametric in some circumstances, but are not universally so. Even when the circumstances most strongly favour the parametric approach, the power advantage is often minor or even trivial.

When parametric methods have an advantage in power it comes from one or both of two ...
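One classical source of that (often modest) advantage is estimator efficiency under the parametric model: for Gaussian data the sample median has asymptotic relative efficiency $2/\pi \approx 0.64$ versus the sample mean, so mean-based procedures squeeze more out of the same data. A quick simulation check (my sketch, not part of the answer):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 101, 20000
samples = rng.normal(0.0, 1.0, size=(reps, n))

var_mean = samples.mean(axis=1).var()          # variance of the sample mean
var_median = np.median(samples, axis=1).var()  # variance of the sample median

# should come out near 2/pi ~ 0.64 for Gaussian data
print(f"relative efficiency of median vs mean: {var_mean / var_median:.2f}")
```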
**14,318. Probability that number of heads exceeds sum of die rolls**

Another way is by simulating a million match-offs between $X$ and $Y$ to approximate $P(X > Y) = 0.9907\pm 0.0002.$ [Simulation in R.]

```r
set.seed(825)
d = replicate(10^6, sum(sample(1:6,100,rep=T))-
                    rbinom(1,600,.5))
mean(d > 0)
[1] 0.990736
2*sd(d > 0)/1000
[1] 0.0001916057   # aprx 95% margin of simulation erro...
```
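The same Monte Carlo check can be written in Python. This sketch (mine, not the answer's) uses fewer replicates but lands on the same answer:

```python
import numpy as np

rng = np.random.default_rng(825)
reps = 50_000
dice_sums = rng.integers(1, 7, size=(reps, 100)).sum(axis=1)   # 100 dice per rep
heads = rng.binomial(600, 0.5, size=reps)                      # 600 coin flips per rep

p_hat = float(np.mean(dice_sums > heads))
print(p_hat)   # close to 0.9907
```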
**14,319. Probability that number of heads exceeds sum of die rolls**

It is possible to do exact calculations. For example in R:

```r
rolls <- 100
flips <- 600
ddice <- rep(1/6, 6)
for (n in 2:rolls){
  ddice <- (c(0,ddice,0,0,0,0,0)+c(0,0,ddice,0,0,0,0)+c(0,0,0,ddice,0,0,0)+
            c(0,0,0,0,ddice,0,0)+c(0,0,0,0,0,ddice,0)+c(0,0,0,0,0,0,ddice))/6}
sum(ddice * (1-pbinom(1:flips, flips, 1...
```
**14,320. Probability that number of heads exceeds sum of die rolls**

A bit more precise:

The variance of a sum or difference of two independent random variables is the sum of their variances. So, you have a distribution with a mean equal to $50$ and standard deviation $\sqrt{292 + 150} \approx 21$.

If we want to know how often we expect this variable to be below 0, we can try to approxi...
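Carrying the normal approximation through (my sketch; the answer is truncated before this step): with mean $50$ and standard deviation $\approx 21$, the difference exceeds $0$ with probability $\approx \Phi(50/21)$.

```python
from math import erf, sqrt

mean = 100 * 3.5 - 600 * 0.5            # E[dice sum] - E[heads] = 50
var = 100 * 35 / 12 + 600 * 0.25        # 291.67 + 150 ~ 441.67
sd = sqrt(var)                          # ~ 21.0

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

p_approx = phi(mean / sd)
print(round(p_approx, 4))   # ~ 0.9913, in line with the simulated 0.9907
```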
**14,321. Probability that number of heads exceeds sum of die rolls**

The following answer is a bit boring but seems to be the only one to date that contains the genuinely exact answer! Normal approximation or simulation or even just crunching the exact answer numerically to a reasonable level of accuracy, which doesn't take long, are probably the better way to go - but if you want the "...
**14,322. Probability that number of heads exceeds sum of die rolls**

The exact answer is easy enough to compute numerically, no simulation needed. For educational purposes, here's an elementary Python 3 script to do so, using no premade statistical libraries.

```python
from collections import defaultdict

# define the distributions of a single coin and die
coin = tuple((i, 1/2) for i in (0, 1))...
```
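The script is cut off above. The same "no premade statistical libraries" idea can be sketched as follows (my reconstruction of the approach, not the original code): build the exact distributions by repeated convolution of `{value: probability}` dicts, then sum over the event of interest.

```python
from collections import defaultdict

def convolve(dist_a, dist_b):
    """Distribution of the sum of two independent variables given as {value: prob}."""
    out = defaultdict(float)
    for a, pa in dist_a.items():
        for b, pb in dist_b.items():
            out[a + b] += pa * pb
    return dict(out)

def n_fold(dist, n):
    """Distribution of the sum of n i.i.d. copies of dist."""
    total = {0: 1.0}
    for _ in range(n):
        total = convolve(total, dist)
    return total

coin = {0: 0.5, 1: 0.5}
die = {v: 1 / 6 for v in range(1, 7)}

heads = n_fold(coin, 600)   # Binomial(600, 1/2), built from scratch
dice = n_fold(die, 100)     # distribution of the sum of 100 dice

# P(heads <= h) for every h, then P(dice sum > heads) = sum_d P(sum=d) P(heads <= d-1)
cdf, acc = {}, 0.0
for h in range(0, 601):
    acc += heads.get(h, 0.0)
    cdf[h] = acc

p = sum(pd * cdf[d - 1] for d, pd in dice.items())
print(p)   # ~ 0.9907, matching the simulation above
```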
**14,323. Posterior distribution and MCMC [duplicate]**

If this was not a clear conflict of interest, I would suggest you invest more time in the topic of MCMC algorithms and read a whole book rather than a few (6?) articles that can only provide a partial perspective.

> How can you "draw samples from the posterior distribution" without first knowing the properties of said...
**14,324. Posterior distribution and MCMC [duplicate]**

> How can you "draw samples from the posterior distribution" without first knowing the properties of said distribution?

In Bayesian analysis we usually know that the posterior distribution is proportional to some known function (the likelihood multiplied by the prior) but we don't know the constant of integration that...
**14,325. Posterior distribution and MCMC [duplicate]**

Your confusion is understandable. Surely, if you already know $p(\theta|X)$, why would you need to draw samples of $\theta$ under this distribution? The answer is usually that the distribution is multivariate, and you want to marginalize over some dimensions of $\theta$ but not others. So for instance, $\theta$ might b...
14,326 | Posterior distribution and MCMC [duplicate] | Just one example to address part (1).
Sometimes you can evaluate the posterior up to a partition function only.
For example, you know that $p(x)= \frac{1}{z}f(x)$, but $z$ is unknown.
The Metropolis-Hastings algorithm:
- Initialize $x_0$
- Choose some proposal distribution $q$
Repeat:
- Sample $y$ from $q(x_{i-1})$
- Accept $y$ if $...
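The pseudocode in this answer can be sketched as a short program. This is an illustrative stdlib-Python version, with a symmetric Gaussian random-walk proposal and the unnormalized standard normal $f(x)=e^{-x^2/2}$ as stand-ins for the answer's generic $q$ and $f$ (neither choice comes from the original answer):

```python
import math
import random

def metropolis_hastings(f, x0, n_samples, step=1.0, seed=0):
    """Sample from a density proportional to f using a Gaussian
    random-walk proposal (symmetric, so the q terms cancel)."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        y = x + rng.gauss(0.0, step)           # propose y ~ q(. | x)
        # accept with probability min(1, f(y)/f(x)); z cancels in the ratio
        if rng.random() < min(1.0, f(y) / f(x)):
            x = y
        samples.append(x)
    return samples

def f(x):
    # unnormalized standard normal: z = sqrt(2*pi) is never needed
    return math.exp(-0.5 * x * x)

draws = metropolis_hastings(f, x0=0.0, n_samples=50_000)
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

Because only the ratio $f(y)/f(x)$ appears, the unknown constant $z$ never has to be computed, which is the point the answer is making.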
14,327 | What kind of curve (or model) should I fit to my percentage data? | Another way to go about this would be to use a Bayesian formulation. It can be a bit heavy going to start with, but it tends to make it much easier to express the specifics of your problem, as well as getting better ideas of where the "uncertainty" is.
Stan is a Monte Carlo sampler with a relatively easy to use programmatic i...
14,328 | What kind of curve (or model) should I fit to my percentage data? | (Edited taking into account comments below. Thanks to @BenBolker & @WeiwenNg for helpful input.)
Fit a fractional logistic regression to the data. It is well suited to percentage data that is bounded between 0 and 100% and is well-justified theoretically in many areas of biology.
Note that you might have to divide all...
14,329 | What kind of curve (or model) should I fit to my percentage data? | This isn't a different answer from @mkt but graphs in particular won't fit into a comment. I first fit a logistic curve in Stata (after logging the predictor) to all data and get this graph.
An equation is
100 invlogit(-4.192654 + 1.880951 log10(Copies))
Now I fit curves separately for each virus in the simplest sce...
14,330 | What kind of curve (or model) should I fit to my percentage data? | Try a sigmoid function. There are many formulations of this shape, including a logistic curve. Hyperbolic tangent is another popular choice.
Given the plots, I can't rule out a simple step function either. I'm afraid you will not be able to differentiate between a step function and any number of sigmoid specifications. Y...
14,331 | What kind of curve (or model) should I fit to my percentage data? | Here are the 4PL (4 parameter logistic) fits, both constrained and unconstrained, with the equation as per C.A. Holstein, M. Griffin, J. Hong, P.D. Sampson, “Statistical Method for Determining and Comparing Limits of Detection of Bioassays”, Anal. Chem. 87 (2015) 9795-9801. The 4PL equation is shown in both figures and...
14,332 | What kind of curve (or model) should I fit to my percentage data? | I extracted the data from your scatterplot, and my equation search turned up a 3-parameter logistic type equation as a good candidate: "y = a / (1.0 + b * exp(-1.0 * c * x))", where "x" is the log base 10 per your plot. The fitted parameters were a = 9.0005947126706630E+01, b = 1.2831794858584102E+07, and c = 6.6483431...
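The fitted equation reported in this answer can be checked by direct evaluation. The sketch below plugs in the parameters as printed (the value of c is truncated in the answer, so the numbers are approximate):

```python
import math

# Parameters as reported in the answer (c is truncated there; approximate)
a = 9.0005947126706630e+01
b = 1.2831794858584102e+07
c = 6.6483431

def three_pl(x):
    """3-parameter logistic: x is log10(virus copies), returns percent."""
    return a / (1.0 + b * math.exp(-c * x))

# For large x the curve approaches its upper asymptote a,
# and y = a/2 exactly where b*exp(-c*x) = 1, i.e. at x = ln(b)/c
x_half = math.log(b) / c
```

With these values the half-maximum sits near x ≈ 2.46 on the log10 scale, consistent with the transition region visible in the scatterplot the answer describes.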
14,333 | What kind of curve (or model) should I fit to my percentage data? | Since I had to open my big mouth about Heaviside, here are the results. I set the transition point to log10(viruscopies) = 2.5. Then I calculated the standard deviations of the two halves of the data set -- that is, the Heaviside is assuming the data on either side has all derivatives = 0.
RH side std dev = 4.76
LH s...
14,334 | What does linear stand for in linear regression? | Linear refers to the relationship between the parameters that you are estimating (e.g., $\beta$) and the outcome (e.g., $y_i$). Hence, $y=e^x\beta+\epsilon$ is linear, but $y=e^\beta x + \epsilon$ is not. A linear model means that your estimate of your parameter vector can be written $\hat{\beta} = \sum_i{w_iy_i}$, whe...
14,335 | What does linear stand for in linear regression? | This post at minitab.com provides a very clear explanation:
A model is linear when it can be written in this format:
Response = constant + parameter * predictor + ... + parameter * predictor
That is, when each term (in the model) is either a constant or the product of a parameter and a predictor variable.
So both ...
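To illustrate the format in this answer (an illustrative sketch, not part of the original post): a model like y = b0 + b1*log(x) still fits it, because each term is a parameter times a (possibly transformed) predictor, so the model stays linear in the parameters even though it is nonlinear in x. A minimal closed-form least-squares fit:

```python
import math

# Illustrative data generated from y = 2 + 3*log(x) exactly
xs = [1.0, 2.0, 4.0, 8.0, 16.0]
ys = [2.0 + 3.0 * math.log(x) for x in xs]

# Transform the predictor; the model is still linear in (b0, b1),
# so ordinary least squares applies unchanged
zs = [math.log(x) for x in xs]
n = len(zs)
z_bar = sum(zs) / n
y_bar = sum(ys) / n
b1 = sum((z - z_bar) * (y - y_bar) for z, y in zip(zs, ys)) / \
     sum((z - z_bar) ** 2 for z in zs)
b0 = y_bar - b1 * z_bar
```

The closed-form slope and intercept recover b0 = 2 and b1 = 3 exactly, which is only possible because the estimator is linear in the observed y values.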
14,336 | What does linear stand for in linear regression? | I would be careful in asking this as an "R linear regression" question versus a "linear regression" question. Formulas in R have rules that you may or may not be aware of. For example:
http://wiener.math.csi.cuny.edu/st/stRmanual/ModelFormula.html
Assuming you're asking if the following equation is linear:
a = coeff...
14,337 | What does linear stand for in linear regression? | You can write out the linear regression as a (linear) matrix equation.
$ \left[ \matrix{a_1 \\a_2 \\a_3 \\a_4 \\a_5 \\ ... \\ a_n} \right] = \left[ \matrix{b_1 & c_1 & b_1*c_1 \\ b_2 & c_2 & b_2*c_2 \\b_3 & c_3 & b_3*c_3 \\b_4 & c_4 & b_4*c_4 \\b_5 & c_5 & b_5*c_5 \\ &...& \\ b_n & c_n & b_n*c_n } \right] \times \left...
14,338 | What does linear stand for in linear regression? | The specific answer to the question is "yes, that is a linear model". In R the "*" operator used in a formula creates what is known as an interaction. If those two variables are both continuous, then the new variable created will be a mathematical product, but it also has meanings when one or both of the variables are ...
14,339 | What's the maximum value of Kullback-Leibler (KL) divergence | Or even with the same support, when one distribution has a much fatter tail than the other. Take
$$KL(P\vert\vert Q) = \int p(x)\log\left(\frac{p(x)}{q(x)}\right) \,\text{d}x$$
when
$$p(x)=\overbrace{\frac{1}{\pi}\,\frac{1}{1+x^2}}^\text{Cauchy density}\qquad q(x)=\overbrace{\frac{1}{\sqrt{2\pi}}\,\exp\{-x^2/2\}}^\text...
14,340 | What's the maximum value of Kullback-Leibler (KL) divergence | For distributions which do not have the same support, KL divergence is not bounded. Look at the definition:
$$KL(P\vert\vert Q) = \int_{-\infty}^{\infty} p(x)\ln\left(\frac{p(x)}{q(x)}\right) dx$$
If P and Q do not have the same support, there exists some point $x'$ where $p(x') \neq 0$ and $q(x') = 0$, making KL go to in...
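One way to see this unboundedness numerically (an illustrative discrete sketch, not from the original answer): hold P fixed and let Q's mass at a point where P has fixed mass shrink toward zero.

```python
import math

def kl(p, q):
    """KL(P||Q) for discrete distributions given as lists of probabilities."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]

def kl_as_q_shrinks(eps):
    # Q puts only eps mass where P puts 0.5; the eps term contributes
    # 0.5 * log(0.5 / eps), which grows without bound as eps -> 0
    return kl(p, [1.0 - eps, eps])
```

In the limit eps = 0 the supports differ and the divergence is infinite, matching the argument above.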
14,341 | What's the maximum value of Kullback-Leibler (KL) divergence | To add to the excellent answers by Carlos and Xi'an, it is also interesting to note that a sufficient condition for the KL divergence to be finite is for both random variables to have the same compact support, and for the reference density to be bounded. This result also establishes an implicit bound for the maximum o...
14,342 | What's the maximum value of Kullback-Leibler (KL) divergence | An answer is here: https://arxiv.org/abs/2008.05932
You must define an L-shaped distribution by transforming the probability distribution into a multiplicity distribution, calculating the quantum of the distribution, and going back to a probability distribution.
The maximum of the KL for your distribution P is KL(P||L).
14,343 | What is the expected value of the logarithm of Gamma distribution? | This one (maybe surprisingly) can be done with easy elementary operations (employing Richard Feynman's favorite trick of differentiating under the integral sign with respect to a parameter).
We are supposing $X$ has a $\Gamma(\alpha,\beta)$ distribution and we wish to find the expectation of $Y=\log(X).$ First, becau...
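The derivation this answer sets up arrives (in the standard references) at the closed form $\mathbb{E}[\log X] = \psi(\alpha) - \log\beta$ for the rate parameterization; the final formula is cut off above. A stdlib-only Monte Carlo sanity check, approximating the digamma function $\psi$ by a central difference of `math.lgamma`:

```python
import math
import random

def digamma(a, h=1e-6):
    """Central-difference approximation to psi(a) = d/da log Gamma(a)."""
    return (math.lgamma(a + h) - math.lgamma(a - h)) / (2 * h)

alpha, beta = 3.0, 2.0              # shape and rate
rng = random.Random(42)
n = 100_000
# random.gammavariate takes (shape, scale); scale = 1/rate
mc = sum(math.log(rng.gammavariate(alpha, 1.0 / beta)) for _ in range(n)) / n
exact = digamma(alpha) - math.log(beta)
```

The simulated mean of log X agrees with the closed form to Monte Carlo accuracy, which is a quick way to check the sign conventions of the parameterization.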
14,344 | What is the expected value of the logarithm of Gamma distribution? | The answer by @whuber is quite nice; I will essentially restate his answer in a more general form which connects (in my opinion) better with statistical theory, and which makes clear the power of the overall technique.
Consider a family of distributions $\{F_\theta : \theta \in \Theta\}$ which constitute an exponential...
14,345 | How to find the mode of a probability density function? | Saying "the mode" implies that the distribution has one and only one. In general a distribution may have many modes, or (arguably) none.
If there's more than one mode you need to specify if you want all of them or just the global mode (if there is exactly one).
Assuming we restrict ourselves to unimodal distributions*,...
14,346 | How to find the mode of a probability density function? | This answer focuses entirely on mode estimation from a sample, with emphasis on one particular method. If there is any strong sense in which you already know the density, analytically or numerically, then the preferred answer is, in brief, to look for the single maximum or multiple maxima directly, as in the answer fro...
14,347 | How to find the mode of a probability density function? | If you have samples from the distribution in a vector "x", I would do:
mymode <- function(x){
  d <- density(x)
  return(d$x[which.max(d$y)])
}
You should tune the density function so it is smooth enough on the top ;-).
If you have only the density of the distribution, I would use an optimiser to find the m...
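The R snippet in this answer returns the grid point where the kernel density estimate peaks. A rough stdlib-Python sketch of the same idea (the helper name and the fixed Silverman-style bandwidth are illustrative choices, not R's default selector):

```python
import math
import random
import statistics

def kde_mode(xs, grid_points=256):
    """Estimate the mode as the grid point maximizing a Gaussian KDE."""
    n = len(xs)
    sd = statistics.pstdev(xs)
    h = 1.06 * sd * n ** (-0.2)             # Silverman's rule of thumb
    lo, hi = min(xs) - 3 * h, max(xs) + 3 * h
    best_x, best_y = lo, -1.0
    for i in range(grid_points):
        g = lo + (hi - lo) * i / (grid_points - 1)
        # unnormalized KDE height at g; the 1/(n*h) factor cancels in argmax
        y = sum(math.exp(-0.5 * ((g - x) / h) ** 2) for x in xs)
        if y > best_y:
            best_x, best_y = g, y
    return best_x

rng = random.Random(1)
sample = [rng.gauss(5.0, 1.0) for _ in range(1000)]
m = kde_mode(sample)
```

As the answer warns, the bandwidth h controls how smooth the top of the estimate is, and the estimated mode can shift noticeably if h is too small.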
14,348 | How to find the mode of a probability density function? | Step 1: find the first derivative of the function, set it equal to zero, and solve for x.
Step 2: find the second derivative of the function; if the value of the second derivative at that x is negative, then the value of x obtained from the first derivative is the mode, and the function is maximum at that value of ...
14,349 | Probability of drawing a given word from a bag of letters in Scrabble | A formula is requested. Unfortunately, the situation is so complicated it appears that any formula will merely be a roundabout way of enumerating all the possibilities. Instead, this answer offers an algorithm which is (a) tantamount to a formula involving sums of products of binomial coefficients and (b) can be port...
14,350 | Probability of drawing a given word from a bag of letters in Scrabble | Answers to the referenced question apply here directly: create a dictionary consisting only of the target word (and its possible wildcard spellings), compute the chance that a random rack cannot form the target, and subtract that from $1$. This computation is fast.
Simulations (shown at the end) support the computed a...
14,351 | Probability of drawing a given word from a bag of letters in Scrabble | So this is a Monte Carlo solution, that is, we are going to simulate drawing the tiles a zillion of times and then we are going to calculate how many of these simulated draws resulted in us being able to form the given word. I've written the solution in R, but you could use any other programming language, say Python or...
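Since the R code in this answer is truncated here, a compact Python version of the same simulation follows; standard Scrabble counts for the letters of "BOOT" are assumed, with every other tile lumped together as "X":

```python
import random
from collections import Counter

# Assumed standard Scrabble counts for the letters of "BOOT"; the remaining
# tiles of the 100-tile English bag are lumped together as "X".
bag = ["B"] * 2 + ["O"] * 8 + ["T"] * 6 + ["?"] * 2 + ["X"] * 82

def can_form(rack, word="BOOT"):
    """True if the 7-tile rack can spell `word`, using "?" as a wildcard."""
    need = Counter(word)
    have = Counter(rack)
    missing = sum((need - have).values())  # letters not covered directly
    return missing <= have["?"]

random.seed(1)
n_sims = 100_000
hits = sum(can_form(random.sample(bag, 7)) for _ in range(n_sims))
print(hits / n_sims)  # Monte Carlo estimate of P(rack can spell "BOOT")
```

`random.sample` draws without replacement, matching a real rack draw; the estimate's standard error shrinks as $1/\sqrt{\text{n\_sims}}$.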
14,352 | Probability of drawing a given word from a bag of letters in Scrabble | For the word "BOOT" with no wildcards:
$$
p_0=\frac{\binom{n_b}{1}\binom{n_o}{2}\binom{n_t}{1}\binom{n-4}{3}}{\binom{n}{7}}
$$
With wildcards, it becomes more tedious. Let $p_k$ indicate the probability of being able to play "BOOT" with $k$ wildcards:
$$
\begin{eqnarray*}
p_0&=&\frac{\binom{n_b}{1}\binom{n_o}{2}\binom{n_t}{1}\binom{n-4}{3}}{\binom{n}{7}}\\
&&\dots
\end{eqnarray*}
$$
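Plugging in the standard English Scrabble counts ($n_b=2$, $n_o=8$, $n_t=6$, $n=100$ — an assumption, since the answer leaves them symbolic), the no-wildcard formula evaluates directly:

```python
from math import comb

# Assumed standard Scrabble counts; the formula itself is from the answer.
n_b, n_o, n_t, n = 2, 8, 6, 100

# p_0 = C(n_b,1) C(n_o,2) C(n_t,1) C(n-4,3) / C(n,7)
p0 = comb(n_b, 1) * comb(n_o, 2) * comb(n_t, 1) * comb(n - 4, 3) / comb(n, 7)
print(p0)  # ≈ 0.0030
```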
14,353 | Probability of drawing a given word from a bag of letters in Scrabble | Meh.
$$\frac{\partial \gamma}{\partial c} = b_0x^c \ln(x) \sum_{r=0}^{\infty}\frac{(c+\gamma-1)(c+\alpha)_r(c+\beta)_r}{(c+1)_r(c+\gamma)_r}x^r+$$
$$+b_0x^c\sum_{r=0}^{\infty}\frac{(c+\gamma-1)(c+\alpha)_r(c+\beta)_r}{(c+1)_r(c+\gamma)_r}(\frac{1}{c+\gamma-1}+$$
$$+\sum_{k=0}^{r-1}(\frac{1}{c+\alpha+k}+\frac{1}{c+\beta+...
14,354 | Can data cleaning worsen the results of statistical analysis? | It actually depends on the purpose of your research. In my opinion, there could be several:
You want to understand what are the typical factors that cause cases and deaths and that are not affected by epidemic periods, and the factors that cause epidemics (so you are interested in typical, not force majeure, probabilities) -...
14,355 | Can data cleaning worsen the results of statistical analysis? | I personally wouldn't call this "data cleaning". I think of data cleaning more in the sense of data editing - cleaning up inconsistencies in the data set (e.g. a record has reported age of 1000, or a person aged 4 is a single parent, etc.).
The presence of a real effect in your data does not make it "messy" (to the ...
14,356 | Can data cleaning worsen the results of statistical analysis? | To give you a general answer to your question, let me paraphrase one of my old general managers: the opportunities of research are found in the outliers of the model you are fitting.
The situation is similar to the experiment performed by Robert Millikan in determining the charge of an electron. Decades after winning ...
14,357 | Can data cleaning worsen the results of statistical analysis? | The role of "data cleansing" is to identify when "our laws (model) do not work". Adjusting for outliers or abnormal data points serves to allow us to get "robust estimates" of the parameters in the current model that we are entertaining. These "outliers", if untreated, permit an unwanted distortion in the model parameters...
14,358 | Can data cleaning worsen the results of statistical analysis? | One of the most commonly used methods for finding epidemics in retrospective data is actually to look for outliers - many flu researchers, for example, primarily focus on the residuals of their fitted models, rather than the models themselves, to see places where the "day in, day out" predictions of the model fail - on...
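This residual-based surveillance idea can be sketched in a few lines of Python. Everything below is synthetic and invented — a seasonal baseline with an injected burst — purely to illustrate flagging the places where the "day in, day out" model fails:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic weekly case counts: seasonal baseline plus Poisson noise, with
# an invented "epidemic" burst injected at weeks 60-65.
weeks = np.arange(120)
baseline = 50 + 10 * np.sin(2 * np.pi * weeks / 52)
cases = rng.poisson(baseline)
cases[60:66] += 80  # the event we hope to detect

# Fit the routine seasonal model, then flag large positive residuals.
design = np.column_stack([
    np.ones(len(weeks)),
    np.sin(2 * np.pi * weeks / 52),
    np.cos(2 * np.pi * weeks / 52),
])
coef, *_ = np.linalg.lstsq(design, cases, rcond=None)
resid = cases - design @ coef
flagged = np.where(resid > 3 * resid.std())[0]
print(flagged)  # the flagged weeks sit inside the injected burst
```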
14,359 | Strong ignorability: confusion on the relationship between outcomes and treatment | I'll try to break it down a bit. I think most of the confusion when studying potential outcomes (i.e. $Y_0,Y_1$) is to realize that $Y_0,Y_1$ are different from $Y$ without bringing in the covariate $X$. The key is to realize that every individual $i$ has potential outcomes $(Y_{i1},Y_{i0})$, but you only observe $Y_{iT...
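A toy simulation (my own, not from the answer) makes the conditional-independence point concrete: when $X$ drives both $T$ and the potential outcomes, the naive mean difference is biased, but stratifying on $X$ — which is exactly what $(Y_0,Y_1)\perp T\mid X$ licenses — recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Invented data-generating process: X confounds treatment and outcome.
X = rng.binomial(1, 0.5, n)                        # covariate
T = rng.binomial(1, np.where(X == 1, 0.8, 0.2))    # treatment depends on X
Y0 = 2.0 * X + rng.normal(0, 1, n)                 # potential outcome, control
Y1 = Y0 + 1.0                                      # true effect is exactly 1
Y = np.where(T == 1, Y1, Y0)                       # only one outcome observed

naive = Y[T == 1].mean() - Y[T == 0].mean()        # biased: X is ignored
adjusted = np.mean([                               # stratify on X, then average
    Y[(T == 1) & (X == x)].mean() - Y[(T == 0) & (X == x)].mean()
    for x in (0, 1)
])
print(naive, adjusted)  # naive is inflated; adjusted recovers ~1
```

The equal-weight average over strata is valid here because $P(X=0)=P(X=1)=0.5$ by construction.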
14,360 | Strong ignorability: confusion on the relationship between outcomes and treatment | Doubled has a fantastic answer, but I wanted to follow up with some intuitions that have helped me.
First, think of potential outcomes as pre-treatment covariates. I know this seems like a strange thing to do since the word "outcome" is in their name, but considering it this way clarifies some issues. They represent tw...
14,361 | Strong ignorability: confusion on the relationship between outcomes and treatment | First, you ask a question about $Y_0,Y_1$, but you have a DAG that depends only on $Y$. This may hinder your understanding.
Anyway, simply put, $(Y_0,Y_1) \perp \!\!\! \perp T|X$ means that there is no hidden confounder, i.e. no factors that both influence the treatment $T$ and the (value of the) outcome $Y$.
On a DAG,...
14,362 | Extracting slopes for cases from a mixed effects model (lme4) | The model:
library(lme4)
data(sleepstudy)
fm1 <- lmer(Reaction ~ Days + (Days|Subject), sleepstudy)
The function coef is the right approach for extracting individual
differences.
> coef(fm1)$Subject
(Intercept) Days
308 253.6637 19.6662581
309 211.0065 1.8475834
310 212.4449 5.0184067
330   275....
14,363 | Extracting slopes for cases from a mixed effects model (lme4) | I know it is an old question, but this may be useful (at least for later me anyway) - you can use ranef() and fixef() from lme4 and this is less verbose than the previous answer.
> ranef(fm1)
$Subject
(Intercept) Days
308 2.2585509 9.1989758
309 -40.3987381 -8.6196806
310 -38.9604090 -5.4488565
330  23...
14,364 | Question on how to normalize regression coefficient | Although I cannot do justice to the question here--that would require a small monograph--it may be helpful to recapitulate some key ideas.
The question
Let's begin by restating the question and using unambiguous terminology. The data consist of a list of ordered pairs $(t_i, y_i)$ . Known constants $\alpha_1$ and $\a...
14,365 | Why is controlling for too many variables considered harmful? | There is no such thing as a "sweet spot" for the number of variables to control for in order to get an unbiased estimate of the causal effect. Since we are talking about confounding, we must have in mind the estimation of the causal effect of a particular variable. You use a graphic tool called the DAG to map out the c...
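One concrete way controlling for the wrong variable backfires is conditioning on a collider. A hedged simulation sketch (the DAG and coefficients are invented for illustration), using the Frisch–Waugh residualization trick to "control" for $C$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Invented DAG: T -> Y, plus T -> C <- Y, so C is a collider, not a confounder.
T = rng.normal(size=n)
Y = 0.5 * T + rng.normal(size=n)   # true causal effect of T on Y is 0.5
C = T + Y + rng.normal(size=n)     # caused by both treatment and outcome

def slope(x, y):
    """OLS slope of y on x (simple regression with intercept)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

def residualize(v, c):
    """Residuals of v after OLS on c (with intercept)."""
    return v - v.mean() - slope(c, v) * (c - c.mean())

unadjusted = slope(T, Y)  # ≈ 0.5: correct without any adjustment

# "Controlling" for C: partial C out of both T and Y, regress the residuals.
adjusted = slope(residualize(T, C), residualize(Y, C))
print(unadjusted, adjusted)  # adjusting for the collider distorts the effect
```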
14,366 | Why is controlling for too many variables considered harmful? | I would point out three things:
(1) Generally (related to the estimation of causal effects)
Usually you want to explain phenomena out there in the world with parsimonious models including variables deduced from some theory. You may just add any variable that comes to your mind to a regression model and end up with an a...
14,367 | Why is controlling for too many variables considered harmful? | Well, it's related to the concept of p-hacking. Given a sufficient number of plausible confounding variables to be added in the study, it's possible to find a combination of them that would yield significant results (you just plug in or out until you get significant results, and you report those).
There's a very nice p...
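The plug-in-or-out search can be mimicked directly. In this hedged Python sketch the treatment truly has no effect, yet scanning all $2^8$ covariate subsets and keeping the largest $|t|$ statistic produces a more "significant"-looking result than the single pre-registered analysis:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n, k = 100, 8

# Null world: the treatment T truly has no effect on y; the k extra
# covariates are pure noise.
T = rng.binomial(1, 0.5, n).astype(float)
Z = rng.normal(size=(n, k))
y = rng.normal(size=n)

def t_stat_for_T(cols):
    """|t| statistic for T when regressing y on [1, T, Z[:, cols]]."""
    X = np.column_stack([np.ones(n), T, Z[:, list(cols)]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (n - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return abs(beta[1] / np.sqrt(cov[1, 1]))

# Try every subset of "control" variables and keep the most favourable one.
results = {cols: t_stat_for_T(cols)
           for r in range(k + 1) for cols in combinations(range(k), r)}
honest = results[()]            # the pre-registered analysis
hacked = max(results.values())  # the analysis chosen after peeking
print(honest, hacked)
```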
14,368 | Why is controlling for too many variables considered harmful? | The term you are looking for is overfitting. Wikipedia has a good explanation.
14,369 | Why is controlling for too many variables considered harmful? | There are some helpful mathsy explanations, but I thought perhaps this could use an intuitive example.
Suppose that you're investigating (perhaps for an insurance company) whether hair colour has an impact on crash risk. You look at the data, and at first pass you see that brunettes are 10% more likely to crash than bl...
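The hair-colour story can be written down with made-up numbers (all counts below are invented solely to reproduce the mechanism): within each risk-taking stratum hair colour is unrelated to crashes, yet the aggregate rates differ because more risk-takers happen to be brunette.

```python
# Invented counts: (crashes, drivers) per hair colour and risk-taking stratum.
data = {
    ("brunette", "risk_taker"): (30, 100),
    ("brunette", "cautious"):   (10, 100),
    ("blonde",   "risk_taker"): (15, 50),
    ("blonde",   "cautious"):   (15, 150),
}

def rate(hair):
    crashes = sum(c for (h, _), (c, n) in data.items() if h == hair)
    drivers = sum(n for (h, _), (c, n) in data.items() if h == hair)
    return crashes / drivers

print(rate("brunette"), rate("blonde"))  # 0.20 vs 0.15: brunettes look riskier
# Yet within strata the rates are identical: 30/100 = 15/50, 10/100 = 15/150.
```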
14,370 | What's in a name: hyperparameters | The term hyperparameter is pretty vague. I will use it to refer to a parameter that is in a higher level of the hierarchy than the other parameters. For an example, consider a regression model with a known variance (1 in this case)
$$ y \sim N(X\beta,I) $$
and then a prior on the parameters, e.g.
$$ \beta \sim N(0,\la...
14,371 | What's in a name: hyperparameters | A hyperparameter is simply a parameter that impacts, completely or partly, other parameters. They do not directly solve the optimization problem you face, but rather optimize parameters that can solve the problem (hence the hyper, because they are not part of the optimization problem, but rather are "addons"). For what...
14,372 | What's in a name: hyperparameters | The other explanations are a bit vague; here's a more concrete explanation that should clarify it.
Hyperparameters are parameters of the model only, not of the physical process that is being modeled. You introduce them "artificially" to make your model "work" in the presence of finite data and/or finite computation tim...
14,373 | What's in a name: hyperparameters | It's not a precisely defined term, so I'll go ahead and give you yet another definition that seems to be consistent with common usage.
A hyperparameter is a quantity estimated in a machine learning algorithm that does not participate in the functional form of the final predictive function.
Let me unwind that with an...
14,374 | What's in a name: hyperparameters | As precisely pointed out by @jaradniemi, one use of the term hyperparameter comes from hierarchical or multilevel modeling, where you have a cascade of statistical models, one built over/under the others, using usually conditional probability statements.
But the same terminology arises in other contexts with different ...
14,375 | Fixed effect vs random effect when all possibilities are included in a mixed effects model | The general problem with "fixed" and "random" effects is that they are not defined in a consistent way. Andrew Gelman quotes several of them:
(1) Fixed effects are constant across individuals, and random effects
vary. For example, in a growth study, a model with random intercepts
$a_i$ and fixed slope $b$ correspo...
14,376 | Fixed effect vs random effect when all possibilities are included in a mixed effects model | Executive summary
It is indeed often said that if all possible factor levels are included in a mixed model, then this factor should be treated as a fixed effect. This is not necessarily true FOR TWO DISTINCT REASONS:
(1) If the number of levels is large, then it can make sense to treat the [crossed] factor as random.
I...
14,377 | Fixed effect vs random effect when all possibilities are included in a mixed effects model | To add to the other answers:
I don't think you are logically obliged to always use a fixed effect in the manner described in the OP. Even when the usual definitions/guidelines for when to treat a factor as random are not met, I might be inclined to still model it as random when there are a large number of levels, so th...
14,378 | Fixed effect vs random effect when all possibilities are included in a mixed effects model | If you're talking about the situation where you know all possible levels of a factor of interest, and also have data to estimate the effects, then definitely you don't need to represent levels with random effects.
The reason that you want to set random effect to a factor is because you wish to make inference on the ef...
14,379 | Fixed effect vs random effect when all possibilities are included in a mixed effects model | Following the above discussion, I thought that side could also be modelled on a random slope across subjects, that is with the following LME model:
lmer(size ~ age + (1+side|subject), demo) [model 4 or lme4]
(because as you said: there is a random variation of size across subject and in addition to this, a random varia...
14,380 | How is the kurtosis of a distribution related to the geometry of the density function? | The moments of a continuous distribution, and functions of them like the kurtosis, tell you extremely little about the graph of its density function.
Consider, for instance, the following graphs.
Each of these is the graph of a non-negative function integrating to $1$: they are all PDFs. Moreover, they all have exact...
14,381 | How is the kurtosis of a distribution related to the geometry of the density function? | [NB this was written in response to another question on site; the answers were merged to the present question. This is why this answer seems to respond to a differently worded question. However much of the post should be relevant here.]
Kurtosis doesn't really measure the shape of distributions. Within some distributio...
14,382 | How is the kurtosis of a distribution related to the geometry of the density function? | For symmetric distributions (that is, those for which the even centred moments are meaningful) kurtosis measures a geometric feature of the underlying pdf.
It is not true that kurtosis measures (or is in general related to)
the peakedness of a distribution. Rather, kurtosis measures
how far the underlying distribution...
14,383 | How is the kurtosis of a distribution related to the geometry of the density function? | A different kind of answer: We can illustrate kurtosis geometrically, using ideas from http://www.quantdec.com/envstats/notes/class_06/properties.htm: graphical moments.
Start with the definition of kurtosis:
$$ \DeclareMathOperator{\E}{\mathbb{E}}
k = \E \left( \frac{X-\mu}{\sigma} \right)^4 =\int \left(\frac{x-\mu...
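As a numerical companion to the definition just quoted, the following Python sketch (an editorial illustration, not part of the original answer; the two discrete distributions are invented for the demonstration) evaluates the population kurtosis E[((X - mu)/sigma)^4] for two simple symmetric distributions with visibly different shapes:

```python
import math

def kurtosis(values, probs):
    """Population kurtosis E[((X - mu)/sigma)^4] of a discrete distribution."""
    mu = sum(v * p for v, p in zip(values, probs))
    var = sum((v - mu) ** 2 * p for v, p in zip(values, probs))
    sigma = math.sqrt(var)
    return sum(((v - mu) / sigma) ** 4 * p for v, p in zip(values, probs))

# A symmetric two-point distribution at +/-1: kurtosis is exactly 1,
# the smallest value possible, whatever one's notion of "peakedness".
print(kurtosis([-1.0, 1.0], [0.5, 0.5]))  # 1.0

# Moving half the mass to a spike at 0 raises the kurtosis (to 2 here),
# even though both shapes are symmetric and bounded.
print(kurtosis([-1.0, 0.0, 1.0], [0.25, 0.5, 0.25]))
```

The two distributions here are deliberately crude; the point, as in the answers above, is that kurtosis is a single tail-sensitive number, not a description of the density's geometry.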
14,384 | How is the kurtosis of a distribution related to the geometry of the density function? | Kurtosis is not related to the geometry of the distribution at all, at least not in the central portion of the distribution. In the central portion of the distribution (within the $\mu \pm \sigma$ range) the geometry can show an infinite peak, a flat peak, or bimodal peaks, both in cases where the kurtosis is infinite,...
14,385 | Is multicollinearity really a problem? | It's a problem for causal inference - or rather, it indicates difficulties in causal inference - but it's not a particular problem for prediction/forecasting (unless it's so extreme that it prevents model convergence or results in singular matrices, and then you won't get predictions anyway). This, I think, is the mean...
14,386 | Is multicollinearity really a problem? | It's not an issue for predictive modeling when all you care about is the forecast and nothing else.
Consider this simple model:
$$y=\beta+\beta_xx+\beta_zz+\varepsilon$$
Suppose that $z=\alpha x$
We have perfectly collinear regressors, and a typical OLS solution will not exist because $(X^TX)^{-1}$ has a singularity.
...
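The forecasting point can be made concrete with a small sketch (illustrative only, not taken from the answer; the value of alpha, the x values, and the coefficient pairs are made up): when $z=\alpha x$, every coefficient pair with $\beta_x + \alpha\beta_z$ held fixed produces identical predictions, so the forecast is unaffected even though the individual coefficients are not identifiable.

```python
# With z = alpha * x, predictions depend only on b_x + alpha * b_z,
# not on b_x and b_z separately.
alpha = 2.0
x = [1.0, 2.0, 3.0]
z = [alpha * xi for xi in x]

def predict(b0, b_x, b_z):
    return [b0 + b_x * xi + b_z * zi for xi, zi in zip(x, z)]

# (b_x, b_z) = (3, 0) and (1, 1) both satisfy b_x + alpha * b_z = 3,
# so they yield the same fitted values:
print(predict(0.5, 3.0, 0.0))  # [3.5, 6.5, 9.5]
print(predict(0.5, 1.0, 1.0))  # [3.5, 6.5, 9.5]
```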
14,387 | Is multicollinearity really a problem? | Multicollinearity is generally not the best scenario for regression analysis. Our life would be much easier if all predictors are orthogonal.
It's a problem for model interpretation (trying to understand the data):
Multicollinearity affects the variance of the coefficient estimators, and therefore estimation precision...
14,388 | Is multicollinearity really a problem? | I'd argue that if the correlation between a variable and another variable (or linear combination of variables) changes between the in-sample and out-of-sample data, you can start to see multicollinearity affecting the accuracy of out-of-sample predictions. Multicollinearity just adds another assumption (consistent corr...
14,389 | What is the difference between moment generating function and probability generating function? | The probability generating function is usually used for (nonnegative) integer valued random variables, but is really only a repackaging of the moment generating function. So the two contain the same information.
Let $X$ be a non-negative random variable. Then (see https://en.wikipedia.org/wiki/Probability-generating...
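The "repackaging" claim can be checked numerically: for an integer-valued $X$, the pgf is $G(s)=E[s^X]$ and the mgf is $M(t)=E[e^{tX}]$, so $M(t)=G(e^t)$. A minimal sketch (the pmf below is an arbitrary example, not taken from the answer):

```python
import math

# An arbitrary distribution on {0, 1, 2} used only for illustration.
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

def pgf(s):
    """G(s) = E[s^X] for the discrete pmf above."""
    return sum(p * s ** k for k, p in pmf.items())

def mgf(t):
    """M(t) = E[exp(t*X)] for the same pmf."""
    return sum(p * math.exp(t * k) for k, p in pmf.items())

# The two functions agree under the substitution s = exp(t):
t = 0.7
print(abs(mgf(t) - pgf(math.exp(t))) < 1e-12)  # True
```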
14,390 | What is the difference between moment generating function and probability generating function? | Let us define both first and then specify the difference.
1) In probability theory and statistics, the moment-generating function (mgf) of a real-valued random variable is an alternative specification of its probability distribution.
2) In probability theory, the probability generating function (pgf) of a discrete ra...
14,391 | Why is there always at least one policy that is better than or equal to all other policies? | Just past the quoted part, the same paragraph actually tells you what this policy is: it is the one that takes the best action in every state. In an MDP, the action we take in one state does not affect rewards for actions taken in others, so we can simply maximize the policy state-by-state.
14,392 | Why is there always at least one policy that is better than or equal to all other policies? | The existence of an optimal policy is not obvious. To see why, note that the value function provides only a partial ordering over the space of policies. This means:
$$\pi' \geq \pi \iff v_{\pi'}(s) \geq v_{\pi}(s), \forall s \in S $$
Since this is only a partial ordering, there could be a case where two policies, $\pi_...
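A two-state toy example (invented here for illustration, not from the answer) shows why this ordering is only partial: neither of the two value functions below dominates the other in every state, so the two policies are incomparable under the definition above.

```python
def dominates(v_a, v_b):
    """True if policy a's value is >= policy b's value in every state."""
    return all(v_a[s] >= v_b[s] for s in v_a)

# Hypothetical state values for two policies on states s1, s2:
v_pi1 = {"s1": 2.0, "s2": 0.0}
v_pi2 = {"s1": 0.0, "s2": 2.0}

print(dominates(v_pi1, v_pi2), dominates(v_pi2, v_pi1))  # False False
```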
14,393 | Why is there always at least one policy that is better than or equal to all other policies? | $\newcommand{\mc}{\mathcal} \newcommand{\mb}{\mathbb}$
Setting
We are considering the setting of:
Discrete actions
Discrete states
Bounded rewards
Stationary policy
Infinite horizon
The optimal policy is defined as:
$$
\pi^\ast \in \arg \max_\pi V^\pi(s), \forall s \in \mc{S} \tag{1}
$$
and the optimal value funct...
14,394 | Why is there always at least one policy that is better than or equal to all other policies? | I spent a bit of time researching this for my master's thesis.
Even though this question is a bit old, I've decided to share my findings anyway since it would have helped me earlier when I ran into this page and didn't think the answers so far gave a full picture.
The short story: The question is non-trivial for gener...
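To make the existence claim tangible, here is a small sketch (a hypothetical two-state, two-action MDP with made-up deterministic transitions and rewards, not taken from any of the answers): value iteration produces state values that weakly dominate the value of every deterministic policy in every state, which is exactly the dominance property the answers above discuss.

```python
import itertools

states, actions, gamma = [0, 1], [0, 1], 0.9
nxt = {0: {0: 0, 1: 1}, 1: {0: 0, 1: 1}}          # deterministic transitions
rew = {0: {0: 0.0, 1: 1.0}, 1: {0: 0.0, 1: 2.0}}  # made-up rewards

# Value iteration: v*(s) = max_a [ r(s,a) + gamma * v*(next(s,a)) ].
v = {s: 0.0 for s in states}
for _ in range(2000):
    v = {s: max(rew[s][a] + gamma * v[nxt[s][a]] for a in actions)
         for s in states}

def policy_value(pi):
    """Value of a deterministic stationary policy, by iteration."""
    w = {s: 0.0 for s in states}
    for _ in range(2000):
        w = {s: rew[s][pi[s]] + gamma * w[nxt[s][pi[s]]] for s in states}
    return w

# v* weakly dominates every deterministic policy's value in every state.
ok = all(
    all(v[s] >= policy_value(dict(zip(states, c)))[s] - 1e-6 for s in states)
    for c in itertools.product(actions, repeat=len(states))
)
print(ok)  # True
```

This is only a finite sanity check on one tiny MDP, not a proof; the general argument needs the contraction-mapping machinery sketched in the answers.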
14,395 | Kullback-Leibler divergence - interpretation [duplicate] | Because I compute slightly different values of the KL divergence than reported here, let's start with my attempt at reproducing the graphs of these PDFs:
The KL distance from $F$ to $G$ is the expectation, under the probability law $F$, of the difference in logarithms of their PDFs. Let us therefore look closely at t...
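In the discrete case the expectation just described becomes a finite sum, which makes the definition easy to check directly; the sketch below (with illustrative distributions, not the PDFs discussed in the answer) also exhibits the asymmetry of the divergence.

```python
import math

def kl(p, q):
    """D(p||q) = sum_i p_i * log(p_i / q_i) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.5]  # illustrative two-point distributions
q = [0.9, 0.1]

print(kl(p, q), kl(q, p))  # the two directions differ: KL is not a metric
assert kl(p, p) == 0.0     # and it vanishes only when the arguments match
```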
14,396 | Kullback-Leibler divergence - interpretation [duplicate] | KL divergence measures how difficult it is to fake one distribution with another one. Assume that you draw an i.i.d. sample of size $n$ from the red distribution and that $n$ is large. It may happen that the empirical distribution of this sample mimics the blue distribution. This is rare but this may happen... albeit ...
14,397 | When the Central Limit Theorem and the Law of Large Numbers disagree | The error here is likely in the following fact: convergence in distribution implicitly assumes that $F_n(x)$ converges to $F(x)$ at points of continuity of $F(x)$. As the limit distribution is of a constant random variable, it has a jump discontinuity at $x=1$, hence it is incorrect to conclude that the CDF converges t...
14,398 | When the Central Limit Theorem and the Law of Large Numbers disagree | For iid random variables $X_i$ with $E[X_i]= \operatorname{var}(X_i)=1$ define
\begin{align}Z_n &= \frac{1}{\sqrt{n}}\sum_{i=1}^n X_i,\\
Y_n &= \frac{1}{n}\sum_{i=1}^n X_i.
\end{align}
Now, the CLT says that for every fixed real number $z$, $\lim_{n\to\infty} F_{Z_n}(z) = \Phi(z-1)$. The OP applies the CLT to eva...
14,399 | When the Central Limit Theorem and the Law of Large Numbers disagree | Your first result is the correct one. Your error occurs in the second part, in the following erroneous statement:
$$\lim_{n \rightarrow \infty} F_{\bar{X}_n}(1) = 1.$$
This statement is false (the right-hand-side should be $\tfrac{1}{2}$) and it does not follow from the law of large numbers as asserted. The weak law ...
14,400 | When the Central Limit Theorem and the Law of Large Numbers disagree | Convergence in probability implies convergence in distribution. But... what distribution? If the limiting distribution has a jump discontinuity then the limits become ambiguous (because multiple values are possible at the discontinuity).
where $F_{\bar X_n}()$ is the distribution function of the sample mean $\bar X_n...
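The point the answers agree on, that $F_{\bar X_n}(1)$ tends to $\tfrac{1}{2}$ rather than $1$ because $x=1$ sits exactly at the jump of the limiting CDF, can be checked with a seeded Monte Carlo sketch (an editorial illustration, not from the answers; exponential variables are used because they have mean and variance both equal to 1):

```python
import random

random.seed(0)
n, trials = 400, 2000

# Estimate P(Xbar_n <= 1) for Exp(1) variables: E[X] = var(X) = 1,
# so x = 1 is precisely the discontinuity point of the limit CDF.
hits = sum(
    sum(random.expovariate(1.0) for _ in range(n)) / n <= 1.0
    for _ in range(trials)
)
print(hits / trials)  # close to 0.5, nowhere near 1
```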